Quick link: Knowing what your algorithm thinks it knows

2023-11-06

The paper: What's in a Prior? Learned Proximal Networks for Inverse Problems.

What it does: This will be clearer with a bit of context, step by step:

- In an inverse problem you want to recover a signal from incomplete or noisy measurements — deblurring a photo, say, or reconstructing a CT scan. Many different signals are compatible with the same measurements, so you need a prior: an assumption, usually encoded as a regularizer, about what plausible signals look like.
- A standard way to solve these problems is with proximal algorithms, which alternate between pulling the estimate toward the measurements and applying the regularizer's proximal operator, a step that pulls it toward the prior (there's a small sketch of this after the list).
- Modern "plug-and-play" methods replace that proximal step with a trained denoising network. This works remarkably well in practice, but a generic network isn't the proximal operator of any actual regularizer, so you no longer know what prior you're effectively imposing.
- The paper constructs learned proximal networks that are guaranteed, by construction, to be exact proximal operators of some regularizer, and proposes a training scheme ("proximal matching") that drives that regularizer toward the log-prior of the data.
- The payoff: the learned regularizer can be recovered and evaluated explicitly. You can ask the model how plausible it considers any given signal; in other words, you can inspect what it thinks it knows.
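To make the proximal step concrete, here's a minimal sketch (plain NumPy, not the paper's code) of proximal gradient descent, also known as ISTA, with the classic hand-designed sparsity prior R(x) = ||x||_1, whose proximal operator is soft-thresholding. The toy problem, sizes, and regularization weight are all illustrative; the paper's learned networks would slot into the `prox` argument below, except that theirs come with a regularizer you can read back out.

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam * ||x||_1 (soft-thresholding).
    Solves argmin_x 0.5*||x - v||^2 + lam*||x||_1 in closed form."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def ista(A, y, prox, lam, n_iters=300):
    """Proximal gradient descent for 0.5*||Ax - y||^2 + lam*R(x).
    `prox` is the proximal operator of R; plugging a trained denoising
    network in here instead is the plug-and-play idea the paper builds on."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)               # gradient of the data-fidelity term
        x = prox(x - step * grad, step * lam)  # prior step: pull toward plausible signals
    return x

# Toy compressed sensing: recover a 5-sparse signal from 50 measurements.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 100))
x_true = np.zeros(100)
x_true[rng.choice(100, size=5, replace=False)] = rng.normal(size=5)
y = A @ x_true + 0.01 * rng.normal(size=50)
x_hat = ista(A, y, prox_l1, lam=0.1)
print(f"relative error: {np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true):.3f}")
```

With the hand-designed L1 prior, we know exactly what assumption the algorithm is making (sparsity); the paper's contribution is getting the same kind of legibility when the prox is learned from data rather than written by hand.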

That was a lot for a quick link. Why does it matter? It's a neat technique worth exploring for anybody training models from scratch (and I'm pretty sure we'll see quite a bit of that in the coming years), but it's also a good strategic reminder that the opacity of contemporary models is a technical side effect of how we build them, not something inherent in the problem itself. Regulatory and practice frameworks built on the assumption that any sufficiently powerful AI must be an unreadable black box might become obsolete sooner rather than later.

Truth is, we've only just started to figure out this style of software building, and our hardware, data, enthusiasm, and money outpace our experience and understanding. That won't last forever. The future of AI building is likely to be not just more powerful but also more transparent and better understood. The unreadable complexity of our models isn't a sign of sophistication but a reminder of their still-experimental nature.