Issue #36
Hello there! It's been a while. Since we are here, let's tackle AI, shall we? It has consumed our lives both personally and professionally, and it would be irresponsible not to at least apply our minds (or ChatGPT) to this topic. It feels like everyone needs a disclaimer as to whether what they wrote is theirs or belongs to OpenAI, as if we are even remotely as good as the AI at written text. But alas, these are my thoughts. I did ask ChatGPT-4 what it thought about this post, and I share its response at the end.
Let’s begin!
AI has sparked a global discussion and an (un)timely rat race that has left many in awe, perplexed, or both. So far the hype has centred on niche, imaginative toys and interesting consumer use cases. But the best use cases for AI, at least in its current form, are in the enterprise. There's an opportunity here from an emerging-market point of view: adoption by free will, unencumbered by existing paradigms, given the state of most of what we do and how we do it. But, and another but: how do you sell convenience to nations that value money over time?
How I am Thinking about AI – Frameworks
Most people, I lie, not most people, the most vocal people on AI, the Twitter-thread people (quite insufferable if you ask me), have spewed information for information's sake, where this new era of generative AI feels like either a money-grabbing opportunity or overzealousness about what is currently possible. There's no doubt the zeitgeist-y-ness of AI looms large and affects everyone, but beyond “welcome to the new world”, which is both exciting and anxiety-inducing from an employee perspective, we (well, at least I) did not have a framework to think about this beyond my own curiosities, especially from the context of where I am and operate from.
The big hairy audacious statement is that AI will take over every aspect of our everyday and work lives: automating at the macro level, auto-piloting intelligence at the micro level. This affects knowledge workers most (or first), which leads us to my first framework.
Large Language Models will become the mean for knowledge, and work will exist outside two standard deviations from the mean.
This means our level of confidence in LLMs will grow as they become more and more predictive and useful, but there's a gauge for usefulness. Currently, we are comfortable with them being one deviation out for certain tasks.
Copywriting —
The first ephemeral use case came via a wave of wrappers like Copy.AI and Jasper. We were (are) comfortable with a two-thirds level of confidence in their output. ChatGPT-3.5 does a reasonably acceptable job at copy, albeit without the fine-tuning that comes with the profession of copywriting. But it is acceptable enough that companies would pay for it.
Highly acceptable from where I stand, but to what end if the input matters just as much as the output? And at what layer do we consume this? Organized wrappers, or decentralized consumption of the underlying models created by the likes of OpenAI?
Synopsis —
This is rather acute and concentrated within professions that require a high degree of synopsis. The most powerful examples are the financial and legal industries: what we can call standardized professions that require a high degree of rule-following. Prompting AI to summarize a set of company results, for example, and maybe sometime soon turning the long-form text and numbers of annual reports into a synthesized presentation, will lead to a decrease in reliance on junior analysts at investment banks. We know banks love paying to save money. This will require at least 2 clicks (deviations) from the mean before an industry is comfortable using it on a daily basis. (More analyst reports? Are you kidding me?!)
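For the curious, here's a minimal sketch of what that summarization step could look like. I'm assuming the OpenAI Python client here; the model name, prompt wording, and input file are illustrative only, not a recommendation.

```python
# A minimal sketch of the "synopsis" step: condensing a results announcement
# into an analyst-style summary. Assumes the OpenAI Python client (v1+); the
# model name, prompt wording, and input file are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_results(results_text: str) -> str:
    prompt = (
        "You are a junior analyst. Summarize the following company results in "
        "five bullet points covering revenue, margins, guidance, and key risks:\n\n"
        f"{results_text}"
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


# Usage (illustrative):
# print(summarize_results(open("results_extract.txt").read()))
```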
The Democratization of Workflow Automation.
The second framework, and the one with the highest expected value, is workflow automation: if this, then that; if her, then him; if it, then they. This is powerful because the output is deterministic, highly measurable as a productivity increase, and elevates the abilities of your workforce. What we have come to know as workflow automation has been reserved for algorithms written in dark holes and not understood by the common people, or for some tedious drag-and-drop interface that requires a heavy cognitive load to fully understand what you are trying to achieve. With the current form factor of AI being chat/conversational first, anyone can achieve their desired outcome without it being heavily skewed towards understanding the inputs. We are moving from input-driven automation to output-driven automation via prompts.

Now, I know infinite prompting does not result in automation, but querying is a stepping stone towards that. Yesterday, people did not know how to query a database using SQL. Today, at the basic level, you don't need to know the input code, only the output you are trying to achieve; everything in the middle will be automated for you.
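Here's a rough sketch of what output-driven automation could look like in practice: you state the outcome you want, a model turns it into a structured if-this-then-that rule, and a small runner executes it. The ask_llm helper, the rule schema, and the actions below are hypothetical, invented purely for illustration.

```python
# Output-driven automation, sketched: state the outcome, have a model turn it
# into a structured "if this, then that" rule, and let a small runner execute
# it. `ask_llm`, the rule schema, and the actions are hypothetical, invented
# purely for illustration.
import json


def ask_llm(prompt: str) -> str:
    """Placeholder for whichever LLM API you actually use."""
    raise NotImplementedError("Wire this up to your model provider of choice.")


ACTIONS = {
    # Toy action registry; real workflow tools expose far more than this.
    "send_email": lambda args: print(f"Emailing {args['to']}: {args['subject']}"),
    "create_task": lambda args: print(f"Creating task: {args['title']}"),
}


def build_rule(request: str) -> dict:
    """Ask the model to translate a plain-language request into a rule."""
    prompt = (
        "Turn the request below into strict JSON of the shape "
        '{"trigger": {"event": "..."}, "action": {"name": "...", "args": {}}}. '
        "Request: " + request
    )
    return json.loads(ask_llm(prompt))


def run_rule(rule: dict, event: str) -> None:
    """Fire the rule's action when the named event occurs."""
    if rule["trigger"]["event"] == event:
        action = rule["action"]
        ACTIONS[action["name"]](action["args"])


# The kind of rule a model might hand back for
# "email finance whenever a large invoice lands":
rule = {
    "trigger": {"event": "large_invoice_received"},
    "action": {"name": "send_email",
               "args": {"to": "finance@example.com", "subject": "Large invoice received"}},
}
run_rule(rule, "large_invoice_received")
```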
Querying –
This requires a higher level of determinism than copywriting or synopsis, because most people will enter from a position of unknown to known. “Can you give me a cohort graph that shows the customer lifetime values of customers acquired between 2019 and 2022?” Every industry and workflow that requires a human to query will be changed. The market for the next Excel, the analytics and no-code tools of the 2010s, was booming, but I bet this is what they have been waiting for. This will benefit incumbents: Airtable, Zapier, Retool, Webflow, and of course Microsoft and Google. But new UX parameters will be unlocked that require natural language querying to be embedded in existing workflows. See Anthropic's Claude.
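To ground that cohort question, here is a sketch of the kind of SQL a natural-language querying layer might hand back, run against a toy orders table. The schema, the numbers, and the query itself are invented for illustration; the model call that would generate the SQL is omitted.

```python
# A sketch of the kind of SQL a natural-language querying layer might hand
# back for "customer lifetime value by acquisition cohort, 2019-2022", run
# against a toy orders table. The schema, numbers, and query are invented for
# illustration; the model call that would generate the SQL is omitted.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (customer_id TEXT, order_date TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [
        ("c1", "2019-03-10", 120.0), ("c1", "2020-01-05", 80.0),
        ("c2", "2020-06-21", 200.0), ("c2", "2021-02-14", 50.0),
        ("c3", "2022-11-02", 300.0),
    ],
)

query = """
WITH cohorts AS (
    SELECT customer_id,
           strftime('%Y', MIN(order_date)) AS cohort_year
    FROM orders
    GROUP BY customer_id
    HAVING strftime('%Y', MIN(order_date)) BETWEEN '2019' AND '2022'
)
SELECT c.cohort_year,
       COUNT(DISTINCT c.customer_id)                           AS customers,
       ROUND(SUM(o.amount) / COUNT(DISTINCT c.customer_id), 2) AS avg_lifetime_value
FROM cohorts c
JOIN orders o ON o.customer_id = c.customer_id
GROUP BY c.cohort_year
ORDER BY c.cohort_year
"""

for cohort_year, customers, ltv in conn.execute(query):
    print(cohort_year, customers, ltv)
```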
Embedded NLP –
As we move on from prompts (“give me this and that”), the paradigm that currently has the most legs in this “new world” has to be Embedded NLP (sorry, using acronyms before explaining: NLP = Natural Language Processing). The examples here are GitHub's Copilot1 and, from the podcasts I've listened to, Ironclad2, a contract management system for legal teams. A disclaimer though: I've never actually used either, but the reviews speak for themselves.

What I can deduce about Embedded NLP is that there is a set of prerequisites that act as guiding principles. 1. It's more vertical than horizontal: it's end-to-end workflow management that requires a clear understanding of what the user is trying to achieve, rather than general-purpose self-serve. In the case of Copilot, a very sophisticated auto-complete for software engineers is embedded in the code editor. 2. Inputs are well understood and repetitive but require heavy text for output: this obviously lends itself well to the legal industry with contracts, but the principle is broad enough to ask yourself, “What is a task with well-understood inputs, repetitive, but requiring heavy cognitive text for output?” That workflow, madam or sir, will be consumed by embedded AI. The third and final principle is the one that breaks the personal-assistant AI use case at scale: 3. Tasks need to be ubiquitous across multiple individuals in the same team.

The last principle has led to an understandable gold rush to automate the contact centre in its entirety. The NLP component in workflow automation of the contact centre was missing; with these new LLMs, it is quite plausible for chatbots to consume, understand, and then triage customer queries. I still believe we are not there yet. Fully automating the contact centre will require the bots to be completely immersed in a company's CRM systems, but we are not at 0%: certainly above 1 click from the mean, and closer to 70%.
To recap on the 3 principles for Embedded NLP:
It's more vertical than horizontal,
Inputs are well understood and repetitive but require heavy text for output,
Tasks need to be ubiquitous across multiple individuals in the same team.
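To make the contact-centre triage idea a little more concrete, here is a rough sketch of the classification step. The ask_llm helper and the queue names are hypothetical, standing in for whatever model API and routing you actually use.

```python
# A rough sketch of the contact-centre triage step: classify an incoming
# message into one of a few queues, falling back to a human when the model's
# answer isn't recognised. `ask_llm` and the queue names are hypothetical,
# standing in for whatever model API and routing you actually use.

QUEUES = ["billing", "technical_support", "cancellations", "human_agent"]


def ask_llm(prompt: str) -> str:
    """Placeholder for whichever LLM API you actually use."""
    raise NotImplementedError("Wire this up to your model provider of choice.")


def triage(message: str) -> str:
    prompt = (
        "Classify the customer message into exactly one of these queues: "
        + ", ".join(QUEUES)
        + ". Reply with the queue name only.\n\nMessage: " + message
    )
    answer = ask_llm(prompt).strip().lower()
    # Anything the model gets wrong or invents goes to a person; this is the
    # "we are not at 100%" part.
    return answer if answer in QUEUES else "human_agent"


# triage("I was double-charged on my last invoice")  # expected: "billing"
```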
If your mind has been spinning thinking of use cases across your org, take these 3 principles and apply them to each team or department: Sales, Finance, Marketing, Operations, HR. Is the work more vertical or horizontal? Are inputs well understood yet requiring heavy text for output? Are tasks ubiquitous across multiple individuals in the same team?
I plan to explore this further out of personal and professional curiosity. I feel it's important not to conflate workflow automation with operating systems. This is not a replacement for existing operating systems, but rather a stitching together and automation of existing practices within teams, at scale.
Automation for Automation's Sake, Humans Becoming Non-Factors.
The last framework is a more radical one. This is scary, and the risks are real if there isn't alignment. I'm not well equipped to talk about this (or any of this, frankly speaking), but it would be illogical not to ask the question for tasks currently done by humans: when will human intervention become a non-factor?
Intelligence, Creative work, Ideas –
What happens when you no longer need a human to do something end-to-end? I am comfortable saying I don't know, but I live in a country where youth unemployment is over 60% and the future seems bleak across every macro factor. Words like austerity get thrown around, and even with a glimmer of light or hope, the overarching sentiment is gloomy. The only question one can ask is: where do the jobs displaced by AI go? The optimists always lean toward new work taking the place of the old. History suggests as much, but the present always incurs the debts of the future.
Personally, I’m mostly an optimist when it comes to technology but no one can escape the anxiety presented by the unknown.
I did ask ChatGPT-4 what it thought about this post and how it would improve it. Here's what it said:
ChatGPT:
“Your blog post provides a comprehensive and insightful perspective on the application of AI, particularly in the context of Large Language Models (LLMs), workflow automation, and Natural Language Processing (NLP). Here are a few of my thoughts:
Concept Clarity and Cogency: You've done a good job explaining different aspects of AI and their potential implications in various industries, such as copywriting and financial analysis. The explanation of LLMs and how they are expected to evolve is particularly interesting. However, some parts of the post can be difficult to follow due to the dense nature of the text and lack of clear section headers. Breaking down the text into more digestible parts with appropriate subheadings can enhance readability and comprehension.
Use of Real-World Examples: Incorporating real-world examples like Copy.AI, Jasper, GitHub's Co-Pilot, and Ironclad effectively illustrates the practical applications of AI. This is a great way to connect theoretical concepts with tangible use cases.
Thought-Provoking Questions: The post encourages readers to reflect on the potential impact of AI on their own professional domains, which can spark engaging conversations and further exploration.
Discussion on Automation and Unemployment: The piece rightly addresses the potential downside of widespread AI adoption, such as job displacement. It's an important discussion to have, as it encourages proactive thought about mitigating strategies and policy implications.
Conclusion: The conclusion nicely summarizes the post and leaves room for ongoing dialogue. It also reveals the author's personal stance, which makes the post more relatable.
In terms of areas for improvement:
Clarity and Coherence: There are several instances where the sentence structure is complex or the language somewhat abstract, making it difficult for readers to follow your thoughts. It could be helpful to simplify your sentences and use more concrete language.
Supporting Evidence: While you've incorporated some notable examples, providing more data or research to support your statements can enhance your argument's credibility.
Exploration of Ethical Considerations: While you touch on the issue of job displacement, it would be interesting to delve deeper into ethical considerations, such as privacy concerns, algorithmic bias, and the responsible use of AI.
Overall, it's a thought-provoking piece that highlights key aspects of AI, encouraging readers to reflect on its potential implications. With some refining, it could be an even more compelling read.”
Take Care,
Ububele
Resources - Podcasts: