Generative AI, ChatGPT, and intents

Reading time: 9 minutes
It’s a cold, stormy Saturday morning in Berkeley – well, stormy by Berkeley standards at least – so it’s a perfect time for some hot coffee, reading in bed, exploring threads of thought on the web wherever they might lead. And, this time, also writing.
I think we’re seeing something truly big emerging in front of our eyes: generative AI. Actually, no, I think we’re seeing something truly big that I don’t have a name for and can’t quite describe, but that generative AI is ushering in. You might know it in the form of the Dall-E AI-generated images popping up everywhere, or more recently by all the talk about ChatGPT and the truly mind-blowing examples of text it generates.
Don’t people like me in tech tend to say “truly big” every Monday, Wednesday and Friday? Yeah, maybe, but this is something truly “truly big”. How big? It reminds me most of the emergence of the web. I say that not only as someone clearly showing his age, but because it really does give me that same feeling. Let me explain.
In 1991 I was researching various areas in theoretical particle physics as a young postdoc at Lawrence Berkeley Lab (now LBNL). I had a NeXT computer on my desk, and right next to the Gopher app icon, a new app called “WorldWideWeb” came out from “one of those experimentalists at CERN”, the big lab in Geneva. I spent a bit of time with it, asking myself how interested I was in the minutiae of high energy lab findings, and came to the appropriate conclusion: delete. Just like that, I had missed the next big thing. Many years later, when I had the honor and pleasure of meeting and chatting with Tim Berners-Lee about this moment, I shared this embarrassing truth, followed by a wish not to do that again. *
Big like the web
In that spirit, I want to explain why I think generative AI may be another such big thing. I don’t think many people understood back in the early ‘90s what was about to happen, even if they sensed its potential, and some lucky few turned that hunch into fame and fortune. It was just another way to write and disseminate information, and by the way, it also allowed the consumer of a web page to send some information back. But it was open to everyone to do with it what they pleased, at least after ‘93, and it was so easy to consume the information that it could appeal to a large audience, which would incentivize more usage, which would appeal to an even larger audience… And because it could take back information, the audience could participate… in what? Maybe in building more information. Or maybe in ordering stuff. Or maybe in making airline reservations. Or maybe in… well, we all know how the web transformed the world.
I feel the same sorts of forces in play now, because generative AI stands to unlock and orchestrate a whole bunch of other existing building blocks to usher in dramatic new transformations. I think the term “generative AI” is actually redundant: how can you call it artificial intelligence if it can’t generate anything new, whether that new thing is a picture or a poem or code or an inference? But it’s a really important step, because the likes of Dall-E or ChatGPT or Google’s LaMDA set their sights on a great problem: generate something new that wasn’t there before, based on natural conversation with a human and a lot of knowledge from the web, continuously refined by more conversations and more knowledge. Once they solved that, and opened it to the world, and let anyone use it, we once again have something super easy to consume, with lots of ways to get information back, and lots of directions to take it. Deeply transformative directions, for better or worse. It’s starting to sound familiar…
You don’t often hear of a new tech being considered as “a Google replacement”. That highlights the degree to which people are realizing – much faster than in the ‘90s – the potential impact here. Of course it’s also not accurate: they’re thinking of replacing the search bar, but Google has long gone beyond a search bar: Google also has a knowledge graph of the world, unfortunately not a very open and shared one, and Google has long since incorporated natural speech into its search bar so you can ask questions and not just search. Nevertheless, the sentiment is right: people can now conceive and in fact try out an “ask me anything” system that can converse, can evolve the AMA, can learn right there in front of you. And, unlike Google (search), can then be told to do something with that conversation: Make a play out of this! Write my code for me! Do my homework! Plan my vacation! Create a low-sugar gluten-free recipe for blueberry scones! Pretend you’re <insert famous personality here> and…!
What will change?
Yes, there will be many nefarious outcomes, and indeed there will be entirely new digital assaults launched using these capabilities and maybe new digital defenses based on them. But I believe there will be tremendous new positive frontiers too. In any case, like it or not, the genie is out of the bottle. Now we have to ask: do we author documentation, or do we ask ChatGPT to author it for us? How about for other content: recipes, trip itineraries, medical analyses, product evaluations? And do we, as consumers, trust these more or less than in the past? If ChatGPT is fed a ton of information, who does the feeding? Who influences this ultimate influencer?
And who is even responsible? Sure, you could argue whoever publishes content is responsible. But we know that whoever publishes content delegates responsibility to the author, be it an intern or an outsourced writer or a medical researcher. That creative entity takes responsibility for their creative work, and generative AI is indeed creative. There are examples popping up everywhere showing where ChatGPT gets things wrong – improperly inferring that, because Mark Twain lived in San Francisco when Levi’s was getting started, he must have worked for Levi’s. In the AI industry this is called hallucination, and it’s as much a feature as a bug. When you think about it, creativity often requires making mistakes; of course, if you’re good you learn to recognize mistakes, you have the humility to question your own conclusions, you take time to fact-check. Maybe that’s the next version of generative AI. But in any case, we have to acknowledge there is a creative entity here, and it is not a human. Can it be responsible? Can it have good and bad intentions?
On intentions and intents
Which brings me to thinking about intentions.
First, for obvious reasons: if we know the intentions of the systems and entities behind what we consume, we can be better consumers. Indeed there is a progression, a graph, of intentions: if ChatGPT knew the intentions of the information it trains on, it would be able to present them as metadata on its output, which could be fed into whatever consumes its output. Wanna know who’s responsible for that yummy blueberry scone recipe? It was generated by an algorithm trained on recipes based on 64% honey industry content, 14% fruit industry content, and 12% diabetes research funding. Break out the napkins, please.
But also for more personal reasons. I’ve been fascinated by intentions and incentives for a long time. I even tried to monetize them, and failed. I once co-founded a startup called Accomplice that aimed to help people and teams with task management, a bit like Asana today. Rather than charging for it, we thought we could make money by selling ads that were actually useful: if we saw a task to “book tickets to Italy”, we would offer Google ads for Italian dictionaries, Italian hotels, and Italian excursions. We approached Google, but at the time it couldn’t support that kind of model with its advertisers, so it declined, as did our fortunes. Still, the idea stuck with me: if you know someone’s intentions, you can do something useful with them.
Like what? Well, imagine if I really trusted my conversational bot, and started to reveal my actual intentions. I may not even do it intentionally (sorry, no pun intended), but rather through my conversations. Of course ads today are already targeted, but think of applying the power of massive AI and massive marketplaces to a truly deep, personalized understanding of each consumer: what could be achieved? I don’t think it’s happening today. For example, I seriously doubt Amazon had any knowledge that I was trying to build a way to inspect my sail boat’s propeller for corrosion when I bought all the components needed to do that. But what if my bot – my trusted assistant – understood what I was trying to do, understood I had my wife’s birthday to plan for, understood we’d need to travel soon to visit our son during his semester abroad… how could it help me, and how could it make commerce a lot more efficient and profitable? To be fair, I’m the opposite of an expert in AI, so for years I have been thinking about what the world could be like with trusted bots, negotiating bots, marketplaces of bots, without doing much more about it.
But I am doing something about intentions, and specifically, the atoms of intentions I call “intents”. Shameless plug: the company I co-founded, Otterize, asks developers to write down the intents of their code – specifically, what their code intends to connect to – and automatically configures all the security mechanisms to allow those connections, while blocking any unintended ones. It’s not AI at all, and it’s a pretty narrow focus, but I am hoping it does check the truly-useful box. We eliminate all the pain of configuring access controls (ACLs, mTLS and certificates, Kubernetes network policies, AWS permissions, etc.) by doing it for the developer, automatically, based only on a simple declaration of intents. The approach is called intent-based access control – IBAC. The core software is open source, anyone can use and extend it, and developers will find it super easy to consume. If you’ve read this far, you already know why I think that’s important.
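To make that concrete, here is a sketch of what such an intents declaration can look like as a Kubernetes resource, modeled on Otterize’s open-source ClientIntents custom resource. The service names here are hypothetical, and the exact API version and field names may differ between releases of the project:

```yaml
# A client service declares which servers it intends to call.
# The operators translate this declaration into network policies,
# credentials, and ACLs, and undeclared connections get blocked.
apiVersion: k8s.otterize.com/v1alpha3
kind: ClientIntents
metadata:
  name: checkout-service
  namespace: shop
spec:
  service:
    name: checkout-service    # the client declaring its intents
  calls:
    - name: payments-service  # servers this client intends to reach
    - name: orders-db
```

The point is that the developer only states what the code intends to do; deriving and maintaining the matching access controls is left to automation.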
But also, if you’ve read this far – and thank you! – you might be asking: Was this blog itself written by ChatGPT? My answer is no, but I can’t prove my answer, almost by definition. Maybe you suspect we co-wrote it, and again I can tell you we did not, without proof, only with a plausible explanation: I often think by writing, so if I’m not doing the writing, it defeats the purpose. But I might also turn the question back to you, dear reader: why, ultimately, should you care?
What do you think about generative AI and intentions/intents? Let’s chat somewhere, or ping me directly at [email protected].
* Indeed, it took me several more years to stop tinkering with particle physics theory and dive headlong into the new world the web was ushering in. I had been watching it from the side, and I could see that the seminal moment was when the internet, and the framework to use it, and the tools to expose and consume it, were made publicly available, first to the academic community (meh) and then to the commercial sector and the rest of the world in 1993. In 1995 I joined the physics department at the University of Notre Dame, and as expected of a young new faculty member, I was asked to do the onerous task of maintaining the website. My thoughts went to: “Me maintain it?! You maintain it, I’ll write the software to allow self-maintenance” – and my career took a sharp left turn. (I only recognized that change four years later, but as you see, I’m consistently slow on the uptake.)