Computational Photography – or revolutionary changes in a conservative industry

9 October 2019, by Horst Buchwald


Part 1

Do you take pictures with your smartphone too? Or do you take the view that this is mere snapshooting, and that a professional photo is possible only with a professional camera? Could it be that advances in AI will soon make this attitude obsolete?

The term "computational photography" occasionally stands out in some journals. Many readers overlook it because there is no German equivalent. Admittedly, the term is somewhat misleading: it is not the computer that controls the photography, but artificial intelligence. In other words, the point is not to teach the computer to think with the aid of AI, but to take better pictures with the smartphone. More precisely: even ordinary people should be able to take professional photos with an AI-based camera.

At this point the professional photographer smiles. His answer: "No chance. Dream on." Is he right?

With Google Clips, the first device of this kind will soon be on the market. First, a few preliminary remarks so that you can better understand the new camera. As you know, the biggest shortcoming of previous smartphone cameras is the lack of a zoom lens, and computational photography is an attempt to replicate an optical zoom. But Google has a further, more important goal: Clips is controlled by an algorithm that records and stores "memorable moments". In other words, Clips decides which situations in your life count as "unforgettable moments" and releases the shutter itself, so you do not have to press a button. The camera also takes over the subsequent image processing: it decides where blur has to be compensated, whether a color balance is needed, and where to crop.
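Google's actual algorithm is proprietary, but a minimal Python sketch can make the automatic-shutter idea more tangible. The following code assumes a stream of camera frames and uses face presence plus sharpness as a crude stand-in for a "memorable moment"; the scoring function, the threshold and the choice of OpenCV's Haar cascade are my own illustrative assumptions, not Google's method.

```python
import cv2
import numpy as np

# Off-the-shelf face detector as a rough proxy for "people in the frame".
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def frame_score(frame_bgr: np.ndarray) -> float:
    """Crude 'memorable moment' score: faces present plus image sharpness."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return len(faces) * 100.0 + sharpness

def auto_shutter(frame_stream, threshold: float = 150.0):
    """Yield only those frames the scorer deems worth keeping,
    i.e. the automatic 'shutter press' described in the text."""
    for frame in frame_stream:
        if frame_score(frame) >= threshold:
            yield frame
```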

Computational photography can now also be defined: it is a photographic technique that uses algorithms to capture predefined, concrete situations and then improve the image quality.
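A simple, generic illustration of the second half of this definition, improving image quality algorithmically, is to merge a burst of frames to suppress sensor noise and then apply a gray-world color balance. This is a textbook sketch, not what Clips actually does; the function names and parameters are illustrative.

```python
import numpy as np

def merge_burst(frames: list) -> np.ndarray:
    """Average a burst of identically framed shots to reduce sensor noise."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

def gray_world_balance(img: np.ndarray) -> np.ndarray:
    """Scale each color channel so the image average becomes neutral gray."""
    means = img.reshape(-1, 3).mean(axis=0)   # per-channel mean
    gains = means.mean() / means              # gray-world gains
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# Usage (frames being a list of HxWx3 uint8 arrays of the same scene):
# enhanced = gray_world_balance(merge_burst(frames))
```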

Now back to Google Clips. It does not work miracles. Google speaks of snapshots within the family, but monitoring at home (children, pets) is also supposed to be possible with it. These are two different situations. The camera takes snapshots only when it registers an "unforgettable situation"; child and pet surveillance, however, does not appear to be part of the algorithm. Or is it? Google also states that the camera reacts only when "good" snapshots are possible. But what is "good"? Opinions often differ widely. Suppose Google is aware of this question: what does that mean? Google relies on deep learning, so the algorithm is supposed to learn what family X, Y or Z considers "good".
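In principle, such per-household learning could look like the sketch below: treat the user's keep/discard decisions as training labels and fit a small classifier on a handful of frame features. This is a heavily simplified assumption on my part, using hypothetical features (face count, sharpness, mean brightness) and scikit-learn; Google's actual on-device model is not public.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: one row per shot, columns =
# [number of faces, sharpness, mean brightness]; label 1 = the household kept it.
X = np.array([[2, 310.0, 120],
              [0,  40.0,  60],
              [1, 280.0, 130],
              [0,  90.0, 200]])
y = np.array([1, 0, 1, 0])

keep_model = LogisticRegression().fit(X, y)

def keep_probability(features: np.ndarray) -> float:
    """Probability that this household would keep a shot with these features."""
    return float(keep_model.predict_proba(features.reshape(1, -1))[0, 1])

# Score a new shot with two faces, high sharpness, moderate brightness.
print(keep_probability(np.array([2, 250.0, 110])))
```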

Deep learning becomes possible because the camera does not take individual photos but clips several seconds long (i.e. around ten photos each). The term "snapshot" is therefore not quite accurate, because the user now has to search the clips and pick out the "good" photos manually.
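If that manual selection were automated, one obvious heuristic would be to rank the roughly ten frames of a clip by sharpness and keep the crispest one. The following sketch, again only an assumption of mine rather than Google's method, uses the variance of the Laplacian as the sharpness measure.

```python
import cv2
import numpy as np

def sharpness(frame_bgr: np.ndarray) -> float:
    """Variance of the Laplacian: higher values mean a crisper image."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def best_frame(clip_frames: list) -> np.ndarray:
    """Pick the sharpest still out of a short clip (roughly ten frames)."""
    return max(clip_frames, key=sharpness)
```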

According to Google, the camera does not require an internet connection and does not transmit any data.

Finally, some technical data: the 12-megapixel camera covers a sufficiently large area with a 130-degree wide-angle lens. That is necessary because nobody aims the camera and presses a shutter button. One drawback: under heavy use the battery probably lasts only about three hours. Another limitation is that the photos can only be accessed with a smartphone; at launch, these are just the Samsung Galaxy S7 and Galaxy S8 and Apple's iPhone 8 and iPhone 8 Plus. The camera is available only in the US, for $250. Reportedly, it will not be offered in Germany.

(to be continued).