PJ Onori’s blog

AI doesn’t have to be horrible

I like Dave Rupert. Never met the guy, but I like him nonetheless. I enjoyed his thoughts on AI and decided to write my own. I find writing useful to get my head in order. AI isn’t going away, therefore it’s only responsible to form a point of view. So unfortunately the world now has another AI blog post. I’m sorry.

I’m skeptical of AI. Less because of the tech and more because of those ushering it into the world. Behind the bluster and marketing-hallucinations is something that could provide value. But, unfortunately, it doesn’t appear headed in that direction.

I’ve been noodling with various AI tools off and on for a year now. Most of my experiences have not been what I’d consider extraordinary. But there are three use cases that I think have real potential.

Your own brain in a jar

I’ve been using Google’s NotebookLM for the past few weeks and feeding it all my public content. The results vacillate between so-so and promising. Nonetheless, I must laud Google’s execution of AI with this tool. NotebookLM focuses on referencing existing data as opposed to conjuring up slop. I’ve seen more nuanced and sophisticated responses as the corpus of data increases. It’s becoming a useful utility to query my past thoughts/musings. Essential? Not by a long shot, but intriguing.

This could be a valuable utility in the corporate setting to train on team meeting transcripts. A team can rack up dozens of meeting hours a week. Having a team brain-in-a-jar has limitless applications. It can enable quick recall and act as a temporary proxy when people aren’t available to respond, just to name a couple. It also has limitless ethical risks. I’m less enthusiastic about this application given its penchant for abuse.

But as a sole user of my own information, I see potential here. I plan to continue exploring this avenue.

Reductive AI could be incredibly helpful

I have a growing belief that speed is not a problem within tech. Improved tools and processes have made productivity higher than ever. Hell, design systems can generate 20%-40% efficiency gains on their own. I’m starting to wonder if all our progress in efficiency is, in turn, creating inefficiency. Now that things can be output so quickly, we’re struggling to keep track of all that extra stuff.

I don’t think we need technology’s help in making more stuff. Stuff deficit isn’t our problem. The last decade of tech has been a continual slide towards more. More features, more meetings, more messages, more documents. More everything. In my experience, the majority of that stuff is unorganized and/or unnecessary. Generative AI is dousing our existing dumpster fire with napalm.

Instead of using AI to make stuff, we should be using it to get rid of stuff. AI can help pull signal from all our noise. I see three main themes where AI could help here:

  1. Aggregation: It’s hard to keep track of what everyone is working on. The larger the company, the harder this becomes. AI could improve visibility by aggregating team activity and output. I dream of a way to see what every design team is up to without resorting to hours of Figma-stalking. But it’s not only a design problem. It’s universal and acts as a constant impediment.

  2. Organization: One thing I’ve been noodling on is the idea of corporate librarians. This imaginary role would collate company information for logical access. Our current reality of corporate wikis and document collections is a mess. If corporations won’t hire librarians, the second best option is to have AI take a crack at it. My hope is that a well-trained AI could continually organize, index and prune company data. Maybe then finding that one HR doc wouldn’t be like embarking on an odyssey.

  3. Extraction: We have so many documents. So many notes. So many transcripts, emails, messages. Imagine simplifying the buffet of artifacts into more digestible tapas—with citations. I worry about hallucinations, but citations should blunt this concern. And any potential to reduce information clutter is worth a bit of risk.

Outputting average as an advantage

People write differently. Often very differently. Only a slim handful of a company’s employees know its content/writing guidelines. Even fewer follow them.

One of the criticisms of AI is that it produces “average.” But average can be good. It’d be great if all company documents were written the same way. Same typeface, same styling, same indents. Same verbiage, same tone. Same everything. I will never let AI touch my personal writing, but I would understand and welcome it in a team environment.

There’s tremendous upside to removing the cognitive overhead of learning someone’s “written dialect”. There’s even greater upside to removing thousands of them.

None of the above feel generative

I understand the potential of AI—I’m not that much of a dummy. But the biggest AI hallucination is us thinking it’s something it isn’t. I remain stubbornly convinced that we’re steering this tech in the wrong direction. I think there’s a world where AI could provide true utility without much of the (perceived or real) SkyNet vibes. But those applications are far smaller in ambition and much more boring. So I’m not holding my breath on my ideas taking off.

We have made a mess of the internet—on many levels. We may have backed into a way to fix it with AI. Ironically, and also unsurprisingly, we’re choosing to double down on more of the same.

I’m hoping for the best and planning for the worst.