Friday, January 19, 2024

Deep Learning and Solving NP-Complete Problems

This recent article about AlphaGeometry solving Olympiad geometry proof problems brought to my attention that there has been active research in recent years on combining deep learning with theorem proving. The novelty of AlphaGeometry is that it specializes in constructing geometry proofs, which previous approaches have had difficulty with, using synthetic training data guided by the theorem prover.

As a formal method akin to type checking in programming languages, theorem proving overcomes the ambiguity of mathematical proofs written in natural language by formalizing proof terms in an algebraic language with rigorous semantics (e.g. Lean), so that proofs written in the language can be verified algorithmically for correctness. In the past, proofs had to be tediously written by humans, as computers had great difficulty constructing them. Computers can do exhaustive proof search, but the search space blows up exponentially in the number of terms, which renders exhaustive search intractable. However, once a proof is constructed, verification is quick. Problems whose solutions are hard to find but quick to verify make up the complexity class NP, and the hardest among them are called NP-complete.
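To make the search-versus-verification asymmetry concrete, here is a small illustrative sketch of my own (not from any of the systems mentioned) using Boolean satisfiability, the canonical NP-complete problem: finding a satisfying assignment by brute force examines up to 2^n candidates, while checking a proposed assignment takes one linear pass over the clauses.

```python
from itertools import product

# A CNF formula as a list of clauses; each literal is (variable_index, polarity).
# This encodes (x0 or not x1) and (x1 or x2) and (not x0 or not x2).
formula = [[(0, True), (1, False)], [(1, True), (2, True)], [(0, False), (2, False)]]

def verify(formula, assignment):
    """Checking a candidate solution is fast: one pass over the clauses."""
    return all(any(assignment[var] == polarity for var, polarity in clause)
               for clause in formula)

def solve(formula, num_vars):
    """Finding a solution by exhaustive search: up to 2**num_vars candidates."""
    for bits in product([False, True], repeat=num_vars):
        if verify(formula, bits):
            return bits
    return None

solution = solve(formula, 3)
print(solution)  # a satisfying assignment for x0, x1, x2
```

Verification stays cheap no matter how the solution was found, which is exactly why a machine-learned proof search can be trusted: the prover only accepts proofs that pass the check.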

Traditional proof search limits the search space to simple deductions, which offloads some of the tedium of writing proofs manually. In recent years, machine learning has been used to prune or prioritize the search space, making it possible to write longer proofs and solve larger problems. It is now possible for computers to construct complete mathematical proofs for a selection of high-school-level math problems. The idea is a natural extension of how AlphaGo prunes the game's search space, which I have written about before.
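As a toy illustration of the idea (my own sketch, not how AlphaGeometry or any real prover works), imagine searching for a chain of inference steps with a priority queue, where a scoring function, which in the real systems would be a learned model, decides which branch to expand first:

```python
import heapq

# Toy "proof search": reach a goal value from a start value by applying
# inference steps (+3, *2). In a real prover the steps are deduction rules
# and the scoring function is a trained neural network.
STEPS = [("add3", lambda x: x + 3), ("dbl", lambda x: x * 2)]

def score(state, goal):
    # Stand-in for a learned policy: prefer states that look closer to the goal.
    return abs(goal - state)

def guided_search(start, goal, limit=10_000):
    frontier = [(score(start, goal), start, [])]
    seen = set()
    while frontier and len(seen) < limit:
        _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path  # the found chain of steps is the "proof"
        if state in seen or state > goal * 4:
            continue  # prune hopeless branches, as a learned model would
        seen.add(state)
        for name, fn in STEPS:
            heapq.heappush(frontier, (score(fn(state), goal), fn(state), path + [name]))
    return None

print(guided_search(2, 22))  # a sequence of step names reaching 22 from 2
```

The point is not the arithmetic but the architecture: the heuristic only reorders and prunes the search, so any "proof" it finds is still a genuine chain of valid steps.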

This is quite promising for a programming language that embeds theorem proving in its type system to express more elaborate safety properties, such as Applied Type System (ATS) designed by my advisor, which I have written about before. Programs written in ATS enjoy a high degree of safety: think of it as a super-charged Rust, even though ATS predates it. Now that generative AI can help people write Rust, a logical next step would be to create a generative AI for ATS.

I would be interested to see how machine learning can be applied to other NP-complete problems in general, such as bin packing, job-shop scheduling, or the traveling salesman problem. However, machine learning will face fierce competition here, as these problems are traditionally solved using approximation algorithms, which come up with a close-enough solution using fast heuristics. Also, their real-world applications, while important, are mostly invisible industrial ones. Admittedly, none are as eye-catching as "AI is able to solve math problems," which gives the impression of achieving general intelligence; in reality, much of the general intelligence is built into the theorem-proving engine when it is used in conjunction with machine learning.
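As a concrete example of such a heuristic (my own illustration, not from the article), the classic first-fit-decreasing algorithm for bin packing runs in polynomial time and is provably within a constant factor of optimal:

```python
def first_fit_decreasing(items, capacity):
    """Classic bin-packing heuristic: sort items largest-first, then place
    each item into the first bin with room, opening a new bin if none fits.
    Polynomial time, and provably uses at most 11/9 * OPT + 6/9 bins."""
    bins = []  # each bin is a list of item sizes
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])  # no existing bin fits; open a new one
    return bins

packed = first_fit_decreasing([7, 5, 4, 3, 2, 2, 1], capacity=10)
print(len(packed), packed)  # 3 [[7, 3], [5, 4, 1], [2, 2]]
```

Total item size is 24, so 3 bins of capacity 10 is actually optimal here; this is the "close-enough solution using fast heuristics" that a learned approach would have to beat.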

However, I think proof construction is a very promising direction for machine learning. Traditionally, machine learning models are a black box: even though they can "do stuff," they cannot rationalize how they do it, let alone teach a human or share the insight with a different machine learning model. With proof construction, at least the proofs are human-readable and have a universal meaning, which means we can actually learn something from the new insights gained by machine learning. It is still not general intelligence, and the symbolic insight gained might very well be a mere probabilistic coincidence, but it does allow machine learning to begin contributing to human knowledge.

Sunday, January 7, 2024

So many camera systems! My thoughts.

My camera journey is a little meandering, but it can roughly be summarized as "all I want is the perfect camera."

My first camera was a Canon PowerShot A50, bought in 1999, a compact CCD camera. It could do 10-bit RAW, which I later discovered let me correct underexposed photos more easily, and the colors actually looked decent! The camera wrote to an original CompactFlash card. Without a reader, you had to transfer images out of the camera through a serial port dongle, which was very slow (115200 baud, or roughly 14 kB/s). I later sold it at a fundraiser.

Next I had a Sanyo Xacti VPC-HD1000, bought in 2007, a camcorder that could shoot 720p60, 1080p30, or 1080i60 video. It also came with a strobe light for photos. It had some very rudimentary electronic stabilization that didn't work very well. I tried Johnny Lee's Poor Man's Steadicam hack, but I was still not happy with the result. After a few years of use, the sensor suddenly went grey on me, and eventually it was disposed of through electronics recycling.

My next camcorder was a Panasonic HC-V750, bought in 2014, which could do 1080p. It had great image stabilization out of the box, but the colors were a little washed out, so I never really used it very much.

My first somewhat serious camera was the Lytro Illum. Bought in 2015, it was the first computational camera that let you adjust focus and depth of field after a picture had been taken. It was ahead of its time. Even though it had a 40MP sensor, the pixels had to record depth, so the final image was only about 10MP, and it was very noisy in low light. Nonetheless, it was a great tool for studying depth of field in composition. I still have it, but after Google acquired Lytro, support was discontinued. The software still works on an Intel Mac, and it should work on Apple silicon for now via Rosetta 2 emulation, but its days are numbered. The Lytro software on Windows may have better luck!

In 2018, I decided I wanted an interchangeable-lens camera that could shoot video. At that time, the Panasonic GH5S was one of the first that could shoot 4K 30p 10-bit in camera, although its 10-bit video is still not natively supported by macOS to this day (VLC/FFmpeg supports it). Over the next few years, I also started a collection of Micro Four-Thirds lenses, which I will talk about later.

I was pretty happy with the GH5S, except the contrast-detection autofocus is not very good. The lack of IBIS also meant I could not do much hand-held video. Some lenses have OIS, but it is not quite enough. I bought a gimbal, but it attracted enough attention that I was asked to leave the premises a few times while out shooting. Also, since macOS does not natively support the 10-bit H.264 from the GH5S, I had to transcode it to ProRes when editing videos, which was a pain. As a result, I used the camera more for stills photography. The 14-bit RAW is great for color grading, though the sensor is limited to 10.2 MP.

The 14-bit RAW at 10.2 MP means the GH5S is excellent for shooting time-lapses. The camera's built-in conversion only produces a lower-resolution 8-bit video, which I ended up using more out of laziness, but the RAW files can be converted to DNG using Adobe DNG Converter and edited in DaVinci Resolve for the most pristine-quality time-lapse.

In 2022, I bought two more cameras which overcompensated for the want of image stabilization and the want of megapixels.

The camera that overcompensated for my want of image stabilization was the GoPro Hero 10. Although the image stabilization is decent, the 4K 60p video is very noisy even during daytime in the shade. The 5.3K cinematic video is probably better, but it can still only record 8-bit video (their next release later in 2022, the Hero 11, records in 10-bit). The GoPro also tends to overheat after 20 minutes or so of continuous recording. This was my first camera with overheating worries, and sadly not the last. As a result, I did not end up using it very much.

The other camera, which overcompensated for my want of megapixels, was the Fujifilm GFX 100S, with a whopping 102 megapixels. It is very satisfying to keep enlarging the picture and play a game of "Where's Waldo" with all the details. However, when pixel peeping, I can see some chromatic aberration and corner softness at F/4 with the 32-64mm zoom lens. I don't have a different lens to know whether this is a common issue. I've seen some photographers stop down to F/22, which tends to make the lens sharper for this high resolution.

Like its predecessor the GFX 100, the 100S can shoot 4K 30p 10-bit video but is more compact. The phase-detect autofocus works great for me. The image stabilization is excellent for both photo and video, although in some situations the video can momentarily show a wobbling effect. Above all, I am most impressed with the colors. After experimenting with a few settings, combining the DR-P (dynamic range priority) mode with the Velvia film simulation gave me the best results. DR-P on its own gives colors that are too flat, but the Velvia simulation restores some contrast; Velvia on its own tends to be too contrasty, since it was originally a color reversal film with only about 5 stops of dynamic range. Using DR-P and Velvia together balances the colors very well.

Although I shoot in RAW+JPEG, the out-of-camera colors are so good that my attempts to color grade the RAW usually result in slightly worse colors! The film simulations can also be applied to the 10-bit videos, and I am very impressed with the video colors as well. The video is 10-bit H.265, which is natively supported by macOS. The 16-bit RAW files from the GFX 100S are still about 100MB apiece after lossless compression, so I find myself really slowing down and being more meticulous with composition when taking photos. However, it still attracted unwanted attention; I was again asked to leave the premises a few times while shooting.

Towards the end of 2023, I bought two more cameras that overcompensated for the avoidance of unwanted attention.

The first was the Sony A6700. It is slightly larger than the ZV-E1, but being APS-C, the A6700 has more compact lenses than the full-frame ZV-E1. The video looks very clean at 4K 60p. The low-light performance is great. Clear Image Zoom (digital zoom) also looks excellent. Not to mention, Sony has the best autofocus. However, Sony's image stabilization is lacking: the footage is usable when standing still but unusable when walking, which means I would have to use the gimbal again. It also tends to overheat after 20 minutes, though a Ulanzi external fan helped.

For the second camera, I decided to try the DJI Osmo Pocket 3. I am very impressed with the low-light performance. At night, its default settings tend to overexpose the image, but I find that -2 EV compensation looks best. The 4K has a lot of detail at no zoom, but begins to lose detail with digital zoom. Autofocus is reasonable, although I do notice the occasional hunting. The image stabilization is also impeccable, it being a gimbal itself, and it fits in the palm of my hand! I think this has the best potential for avoiding unwanted attention.

In reality, I probably will have to use the Pocket 3 and A6700 interchangeably: shoot with Pocket 3 when walking, and A6700 with the 16-50mm lens if I need to zoom.

My biggest horror story with the A6700 happened when a friend asked me to do a photoshoot for their family and insisted that I use "my latest camera." After being spoiled by Fujifilm, I was unimpressed with Sony's S-Cinetone, which is supposed to be their best color profile. Worse, when color grading the RAW files, I had to start by correcting the terrible lens distortion and vignetting from the pancake lens I used. Remember, I bought this camera for compactness, not for the glass! Correcting the lens defects in post is doable, and Sony can also do it in camera very well, but it took me hours. This is probably why Sony lets you apply your own LUTs in camera, and why many photographers sell their LUTs online.

So my experience with cameras is indeed quite meandering, but I will summarize my thoughts on camera systems here.

Canon

I purposefully avoided Canon throughout the years. I will give them credit: they have a respectable camera system with a wide selection of EF-mount lenses for DSLRs and now RF-mount lenses for mirrorless cameras. Also to their credit, Canon still develops its own sensors with reasonably good colors, while most other cameras just use Sony sensors. However, I avoided Canon for two reasons:

  • Back in the day, they deliberately crippled the video capabilities of their photography cameras to avoid cannibalizing their cinema camera line. This changed with the Canon EOS R5, which can shoot 8K RAW internally. Although the R5 has overheating issues, the R5C addressed them with an internal fan.
  • Although there are many third-party EF lenses and third-party cameras supporting the EF mount, they are reverse-engineered and not officially supported by Canon. Furthermore, Canon has started being litigious about third-party RF lenses.

This means that photographers buying into the Canon walled garden have to put up with this artificial product segmentation, which could come back in the future if it is not here now.

Nikon

I really don't have much experience with Nikon. I know some people are happy with it. That's about it.

Micro Four-Thirds

I have the most experience with Micro Four-Thirds, which includes cameras and lenses made by Panasonic and OM System (formerly Olympus). Panasonic lenses are pretty good for video, but I like the Olympus lenses best for their superior optical quality and bokeh. I am absolutely in love with the clutch manual focus on Olympus lenses: clutching the focus ring engages mechanical manual focus, and de-clutching it allows autofocus as well as focus by wire. It gives you a taste of pulling manual focus like a cine lens, although Olympus lenses only have about 90° of focus throw compared to roughly 200° for cine lenses, which makes them a little too sensitive for cinematography.

The Micro Four-Thirds sensor is 1/4 the surface area of full-frame, with exactly a 2x crop factor, so focal-length conversion is straightforward. Due to the smaller sensor, the native telephoto lenses are miniature compared to their full-frame counterparts. The sensor size also works well with Super35 (APS-C) cinema lenses, and it is even possible to use B4 ENG lenses made for 2/3" sensors via a 2x teleconverter. In theory, this makes it the most versatile and diverse camera system, but over the years it has had a lot of unrealized potential.
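The conversion arithmetic above is simple enough to sketch (my own illustration; note the f-number scaling refers to depth of field and total light gathered, not exposure):

```python
CROP_FACTOR = 2.0  # Micro Four-Thirds relative to full-frame

def full_frame_equivalent(focal_mm, f_number, crop=CROP_FACTOR):
    """Convert a lens spec to its full-frame equivalent field of view and
    depth of field. Sensor area scales with 1 / crop**2, hence 1/4 for MFT."""
    return focal_mm * crop, f_number * crop

# A 25mm f/1.4 MFT lens frames like a 50mm lens on full-frame,
# with depth of field comparable to f/2.8.
print(full_frame_equivalent(25, 1.4))  # (50.0, 2.8)
```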

First, the cameras were late to phase-detect autofocus. Olympus had PDAF first, but their cameras could not shoot 10-bit video, and Panasonic only recently released the G9 II with PDAF. Even so, the G9 II body is exactly the same size as the full-frame S5 IIx. I feel that Micro Four-Thirds makes more sense with compact cameras and pancake lenses, but so far Sony is eating their lunch on compactness.

L-Mount

Another camera system of interest is the L-Mount, originally designed by Leica but now shared with Panasonic, Sigma, and DJI in a closed consortium. It has a great lens selection in both full-frame and APS-C formats, and the APS-C system can potentially be very compact as well. Leica cameras and lenses are premium, but Panasonic also makes great cameras and lenses for video.

It is a promising system with good vendor diversity.

Sony E-Mount

Sony makes the vast majority of camera sensors, so there is no question about the image quality there. I am just not personally a fan of their color science.

Sony is also reasonable about licensing the E-Mount to other lens makers, so there is a great diversity of lenses from Sigma, Voigtländer, and even Fujinon. However, the E-Mount was fundamentally designed for the APS-C sensor size, not full-frame: the throat diameter partially occludes the corners of a full-frame sensor. This becomes a challenge in lens design, where distortion and vignetting have to be corrected in camera or in post. Lens distortion and vignetting are prominent even in their high-end G-series FE lenses (e.g. FE 20-70mm F/4 G, FE 20mm F/1.8 G).

This means photographers who want to work with RAW will find themselves spending more time in post-processing. They could save some time by saving their own LUTs to the camera and letting it do both lens and color correction, which explains why Sony cameras seem to have a thriving aftermarket of LUTs.

Sony makes almost all E-Mount cameras, but it is not easy to choose the right one, as they have many camera lines segmented for every need and every price point. This is as much as I could gather about their camera lineup, as of 2023:

Purpose        Full Frame         APS-C
Cinema         FX3, FX6, FX9      FX30
Photography    A7R V, A1          A6700
Videography    A7C II, A9 III     n/a

Some of the cameras share the same sensor, for example FX3 and ZV-E1 share the same full-frame sensor, and FX30 and A6700 share the same APS-C sensor. Apart from the obvious form-factor difference, newer cameras tend to have more computational features that are missing from older cameras. Sony is known for neglecting their firmware updates.

The A6700 is a more recent camera that is supposed to have the most intuitive user interface, but I still find myself confused. For example, if I shoot RAW photos, Clear Image Zoom is also disabled for video. H.265 only offers 60p or 24p; 30p is missing. Sony also does not use the common names for video codecs: H.264 is called XAVC S, and H.265 is called XAVC HS. The camera has three separate modes, photo, video, and S&Q (slow and quick), but the custom presets in these modes are all completely independent. In photo mode, the video button can still record video, but in video mode the shutter button is disabled. None of it makes sense!

As far as I can tell, all the non-cinema cameras can overheat when shooting long video, and that is probably by design. Sony cameras tend to be the most compact, which makes thermal management more challenging.

Fujifilm / Fujinon

Fujifilm is the camera brand, while Fujinon is the lens brand. Fuji has two separate systems: the XF mount for APS-C and the G mount for medium format. Sigma and Tokina also make third-party XF lenses, and Venus Optics (Laowa) makes some G-mount lenses. However, the vast majority of XF and G lenses are made by Fujinon, so the selection is more limited, though you should be able to find a lens for most focal lengths and apertures.

What Fujifilm is famous for is its legendary in-camera film simulation, as I have mentioned before. The film simulation profiles are based on color analysis of real film stocks: Astia, Provia, Velvia, Eterna, Acros, etc. I think they are successful not so much because of nostalgia, but because the colors of the original films were already time-proven.

They recently released the GFX 100 II, which can shoot 8K, but due to the computational cost of scaling 102 MP down to 8K, the 8K is cropped from a 35mm image area, which is a shame. I actually think a 40 MP GFX shooting 8K makes a lot of sense: they already have the X-H2, a 40 MP APS-C camera that shoots uncropped 8K. I would be happy buying into their XF system in the future.

Action Cameras, etc.

Action cameras like the GoPro can be fun for sports and travel and are easy to carry around as an extra camera, just in case, or for behind-the-scenes footage. They are more durable than camera phones and can withstand a lot of abuse. However, GoPro's market share is increasingly being eroded by Insta360, DJI, and other Chinese companies. Action cameras are also known for poor low-light performance. The DJI Osmo Pocket 3 will probably eat their lunch, except it is not weather-proof or water-proof. People who use action cameras will probably always juggle multiple cameras anyway.

Conclusion

If you ask for my recommendation, my favorite camera system by far is still Fujifilm, for the legendary color science alone. It is such a joy to shoot.

I still think that the Micro Four-Thirds has a lot of potential. I would not mind replacing my GH5S with the G9 II when it breaks, but right now I do not see a need to upgrade. I would like to see more compact cameras and pancake lenses.

Sony is probably fine for casual video, but I don't recommend it for photographers because of the time they would need to spend in post-processing to correct lens distortion and colors. Sony cameras are also not the most intuitive to use, but I suppose one can get used to them.

Although my opinion of the Sony camera system is not favorable, I am just speaking from my own experience, with some influence from research done by others. In the end, photography is highly subject to personal preference.