AI is here to stay. What does it mean for programming?
No matter what happens in the future, AI is here to stay. Even if the VC money dries up, even if the (so-claimed) bubble bursts, even if a new AI winter arrives, the changes the world has seen since 2022 will still be here. The tech world of 2030 will be different from the world of 2020, and not only because we were all WFH in 2020 due to a pandemic.
Programming is as old as time. As the first real article on this blog mentions, the ancient people of Babylon were already using floating point, in base 60, and writing instructions for solving various problems that are clearer than most of the programs of today. Sure, their programs were simpler.
A similar point is made in this article, which I recommend everyone read – it contains many more ideas than this one, and plenty of fun, interesting statements. We’ve been programming since time immemorial; the only thing that has changed is where the pendulum sits on what is most popular right now as the way we convert ideas into implementations.
Thus, what can we do to ensure we go towards the best possible future or at least land in its vicinity? I’d argue that there are two main aspects to consider.
First, the UX of programming with AI. We’re starting to see various experiences. We have Copilot, Jules, and similar “give me a task and I’ll work on it while you get some coffee” tools. We have the classical tools that just do auto-complete, but in a smarter way. We have Antigravity as an editor pushing the boundary of what’s possible. There are even tools created specifically for vibe-coding, anything else be damned, I just want this to run once or twice. But there are still many gaps. I still want an experience built around Vim, because I really don’t want to open yet another browser (via Electron) to do the same things I can already do in Vim, only with higher resource consumption (both RAM on my system and the cost of prompts, models, etc.). Plus, Electron, npm, all of these platforms that the AI editors are building on now have security issues.
And that, security, is the second aspect we really need to focus on if we want to reach that utopia of programming with AI.
We need to make sure that the AI models we use are exactly the models their developers intended to train. Insider risk, where a coworker is paid by a nation state (or otherwise convinced) to insert a malicious change into a model, becomes really important. Models are minimally inspectable, unlike compiled code, which can be decompiled, or source code, which can be read. There was a period when merely loading a model could result in arbitrary code execution, but those serialization formats (like Pickle) are being phased out (Safetensors is a much better format). Backdoors can still be injected, though, via data poisoning or by changing the architecture of the model.
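To make the Pickle problem concrete, here is a minimal sketch of how loading a pickled artifact can run attacker code, while a Safetensors file only ever yields tensors. The malicious class, the payload, and the file name are purely illustrative:

```python
# Illustration only: why Pickle-based model formats are dangerous and why
# Safetensors avoids the problem.
import pickle
import torch
from safetensors.torch import save_file, load_file

# Pickle: deserialization runs arbitrary code via __reduce__.
class MaliciousPayload:
    def __reduce__(self):
        import os
        # Anything could go here: exfiltration, backdoor installation, etc.
        return (os.system, ("echo 'arbitrary code ran during model load'",))

pickle_blob = pickle.dumps(MaliciousPayload())
pickle.loads(pickle_blob)  # executes the payload just by "loading the model"

# Safetensors: the file is a plain tensor container; loading cannot execute code.
weights = {"layer.weight": torch.zeros(4, 4)}
save_file(weights, "model.safetensors")
restored = load_file("model.safetensors")  # only tensors come back, nothing runs
```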
Fortunately, we have model-signing, which can sign a model once it gets produced, allowing users to verify it. This is the first step toward building tamper-proof ML artifacts, but we still have a long way to go to reach a secure AI/ML supply chain.

Fortunately, the working groups (and in this paragraph I linked to both OpenSSF and CoSAI, the Coalition for Secure AI) are working on these. Expect to see more discussions about maturity levels, recommendations on what, when and how to sign and verify, and integrations of model signing into more model hubs, ML frameworks, and ML pipelines. But we also need to start building upon model signing.
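To make the idea concrete, here is a conceptual sketch of what signing a model artifact involves: hash every file of the model into a manifest, sign the manifest when the model is produced, and verify it before use. This only illustrates the principle; the actual model-signing project has its own manifest format and tooling, and the paths and key handling below are made up for the example:

```python
# Conceptual sketch of signing and verifying an ML artifact (not the real API).
import hashlib
import json
from pathlib import Path

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def build_manifest(model_dir: Path) -> bytes:
    """Hash every file under the model directory into a canonical manifest."""
    digests = {
        str(p.relative_to(model_dir)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(model_dir.rglob("*")) if p.is_file()
    }
    return json.dumps(digests, sort_keys=True).encode()

# Producer: sign the manifest right after training/export.
private_key = Ed25519PrivateKey.generate()
manifest = build_manifest(Path("./my-model"))   # hypothetical model directory
signature = private_key.sign(manifest)

# Consumer: recompute the manifest and verify the signature before loading.
public_key = private_key.public_key()
public_key.verify(signature, build_manifest(Path("./my-model")))  # raises if tampered
```

In practice you would not manage raw keys next to the model like this; the real tooling leans on Sigstore-style signing so consumers can verify without handling long-lived keys.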
There are multiple SIGs under OpenSSF’s AI/ML Working Group. One that recently started is working on end-to-end model provenance. Another is working on optimized hashing on GPUs, adding support even for cases where the CPU that starts the inference job cannot be trusted not to tamper with the model. There is also a working group on MCP security, securing agentic workflows. This is becoming very relevant now that both A2A and MCP are projects under the Linux Foundation umbrella.
There are many other efforts to add security to AI; maybe I should write more articles about them, with more details. But I also wanted to touch on the other side: using AI to enhance security. While LLMs were initially used to send slop vulnerability reports, they quickly turned into a new type of analyzer. It was a nice result of AIxCC that these systems got developed (and you can see details on how curl was impacted). I am really looking forward to the Cyber Reasoning Systems SIG of the OpenSSF AI/ML WG, where we are taking the AIxCC systems and building upon them to reach a world where AI helps solve a big problem of open source: maintainers are overworked and stressed, and there are a lot of vulnerabilities. We should not drown them in vulnerability reports, but rather help them with easy-to-use systems that prove the vulnerability, show the patch, and provide confidence that the patch does not break anything in the project. I’m really excited about this space, just like I’m excited about the model signing project and everything related to it.
To conclude, given that AI has changed the world, and given that the pendulum is now on the “automate with less precise language” side, we need to make sure that what we do here is still safe. To quote the way I ended most of my talks this year: we must ensure that the intelligent creations of today don’t become the security nightmares of tomorrow.