Generative AI for Insights: A Comprehensive Guide
“With great power comes great responsibility.” You don’t have to be a Marvel buff to recognise that quote, popularised by the Spider-Man franchise.
While the sentiment was originally in reference to superhuman speed, strength, agility, and resilience, it’s a helpful one to keep in mind when discussing the rise of generative AI.
While the technology itself isn’t new, ChatGPT launched it into the hands of millions, something that for many felt like gaining a superpower. But as with any superpower, what matters is what you use it for. Generative AI is no different: there is the potential for greatness, for good, and for evil.
Organisations now stand at a critical juncture to decide how they will use this technology. Ultimately, it’s about taking a balanced perspective: seeing the possibilities as well as the risks, and approaching both with an open mind.
A quick refresher on Generative AI
Generative AI refers to deep-learning algorithms that are able to produce new content based on data they’ve been trained on and a prompt. While traditional AI systems are made to recognise patterns and make predictions, Generative AI can create new content like text, code, audio, and images.
For text, the technology behind generative AI is the large language model (LLM), a type of machine learning model that can perform a variety of natural language processing tasks, such as generating and classifying text, answering questions, and translating between languages.
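To make that concrete, here is a minimal sketch of calling a large language model to perform one of those tasks (summarising a piece of text). It assumes the OpenAI Python SDK (version 1.x) is installed and an OPENAI_API_KEY environment variable is set; the model name and the example excerpt are purely illustrative.

```python
# Minimal sketch: asking an LLM to summarise a short research finding.
# Assumes the OpenAI Python SDK (openai>=1.0) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative excerpt only; not real data.
report_excerpt = (
    "Brand awareness rose among 18-34s after the spring campaign, "
    "while purchase intent was flat quarter over quarter."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name, not a recommendation
    messages=[
        {"role": "system",
         "content": "You summarise market research findings in one sentence."},
        {"role": "user",
         "content": f"Summarise this finding:\n{report_excerpt}"},
    ],
)

print(response.choices[0].message.content)
```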
How can Generative AI enhance insights?
The insights industry is no stranger to change. The tools and methodologies available to insights professionals have evolved rapidly over the past few decades.
At this stage, the extent and speed of the changes that increasingly accessible Generative AI will bring are something we can only speculate on. But there are certain foundations to have in place that will help insights teams figure out how to respond quickly as more information becomes available. Ultimately, it all comes back to asking the right questions, a skill in which insights professionals are experts.
Getting insights faster
One area where we see a lot of potential is the summarisation of information. For example, companies have already been using Generative AI to create auto-summaries of individual reports, removing the need to write an original description for each report manually.
We also see the potential to develop this use case further with the ability to summarise large volumes of information to answer business questions quickly in an easy-to-consume format. This could look like typing a question into a search bar. The generative AI platform would then leverage the company’s internal knowledge to present a succinct answer that links to additional sources.
For insights managers, this would mean being able to answer simple questions more quickly, and it could also help handle much of the groundwork when digging into more complex problems.
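As a rough illustration of how the “question in a search bar” idea could work, the sketch below retrieves relevant excerpts from an internal knowledge base and asks a model to answer using only those excerpts, citing them as sources. The search_internal_reports helper is hypothetical, standing in for whatever search your knowledge platform exposes, and the prompt and model choices are assumptions rather than any specific product’s implementation.

```python
# Rough sketch of answering a business question from internal knowledge:
# retrieve the most relevant report excerpts, then ask the model to answer
# using only those excerpts and to cite them as sources.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set


def search_internal_reports(question: str, top_k: int = 3) -> list[dict]:
    """Hypothetical helper: return the top_k most relevant report excerpts,
    each as {"title": ..., "url": ..., "excerpt": ...}."""
    raise NotImplementedError("Plug in your own knowledge-base search here.")


def answer_with_sources(question: str) -> str:
    excerpts = search_internal_reports(question)
    context = "\n\n".join(
        f"[{i + 1}] {e['title']} ({e['url']})\n{e['excerpt']}"
        for i, e in enumerate(excerpts)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative
        messages=[
            {"role": "system", "content": (
                "Answer the question using only the numbered excerpts provided. "
                "Cite excerpt numbers, and say so if they don't contain the answer."
            )},
            {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content


# Example usage:
# print(answer_with_sources("How did our latest concept test perform with US millennials?"))
```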
Democratising your insights
Generative AI technology could also help broaden the flow of insights throughout an organisation. More specifically, key business stakeholders could easily access critical insights without needing to involve an insights manager directly. By removing barriers to access, generative AI could help support organisations on an insights democratisation journey.
It could also help to alleviate common concerns associated with insights democratisation, like business stakeholders asking the wrong questions. In this use case, the tool could guide business stakeholders without research backgrounds towards asking more relevant questions.
Tailored communication for the right audiences
Another opportunity that comes with generative AI is the ability to tailor communication to both internal and external audiences.
In an insights context, there are several potential applications. It could help make knowledge sharing more impactful by personalising insights and communications for various business stakeholders.
It could also be used to tailor briefs to research agencies to streamline the research process and minimise the back-and-forth involved.
What are the drawbacks for insights pros?
As you’re likely aware, there are also many risks associated with generative AI in its current state, particularly for insights professionals.
Trust
One fundamental risk associated with generative AI is that you can’t fully trust the information it gives you, partly because so much depends on the prompt. Generative AI is statistical, not analytical: it works by predicting the most plausible thing to say next. Give it the wrong prompt and you’re still likely to get a highly convincing answer.
What becomes even trickier is how it can blend correct and incorrect information. In situations where million-dollar business decisions are being made, the information needs to be trustworthy.
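To see why a convincing answer isn’t necessarily a correct one, the toy snippet below mimics the core mechanic of next-word prediction with invented probabilities. Real models score a vocabulary of many thousands of tokens using learned weights; the only point here is that the most probable continuation reads fluently whether or not it happens to be true.

```python
# Toy illustration of "predicting the most likely information to say next".
# The probabilities are invented for illustration only.
prompt = "Our Gen Z segment's favourite feature is"

# Invented next-token distribution for this prompt:
next_token_probs = {
    "personalisation": 0.46,
    "price": 0.31,
    "durability": 0.23,
}

# Greedy decoding: pick the single most probable continuation.
next_token = max(next_token_probs, key=next_token_probs.get)
print(prompt, next_token)  # reads confidently either way, correct or not
```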
It’s also worth noting that ChatGPT is only trained on data up to 2021, which means it won’t reflect current events and trends.
Additionally, many questions surrounding consumer behaviour are complex. While a question like “How did millennials living in the US respond to our most recent concept test?” might generate a clear-cut answer, deeper questions about human values or emotions often require a more nuanced perspective. Not all questions have a single right answer, and when aiming to synthesise large sets of research reports, key details could fall through the cracks.
Transparency
Another key risk to pay attention to is a lack of transparency regarding how algorithms are trained. For example, ChatGPT cannot always tell you where it got its answers from; even when it can, those sources might be impossible to verify or nonexistent.
And because AI algorithms, generative or otherwise, are trained by humans on existing information, they can be biased. This can lead to answers that are racist, sexist, or otherwise offensive. For organisations looking to challenge biases in their decision-making and create a better world for consumers, this is one way generative AI could make work worse rather than more productive.
Security
Common use cases for ChatGPT involve generating emails, meeting agendas, or reports. But putting in the necessary details to generate those texts may leave sensitive company information at risk.
In fact, an analysis conducted by security firm Cyberhaven found that of 1.6 million knowledge workers across industries, 5.6% had tried ChatGPT at least once at work, and 2.3% had put confidential company data into ChatGPT. Companies like JP Morgan, Verizon, Accenture and Amazon have banned staff from using ChatGPT over security concerns. And just recently, Italy became the first Western country to ban ChatGPT while investigating privacy concerns, drawing attention from privacy regulators in other European countries.
For insights teams or anyone working with proprietary research and insights, it’s essential to be aware of the risks associated with inputting information into a tool like ChatGPT, and to stay up-to-date on both your organisation’s internal data security policies and the policies of providers like OpenAI.
What to do next?
Generative AI offers both intriguing opportunities and clear risks for businesses, and there is still a lot that is unknown.
Insights leaders have the opportunity to show both their teams and organisations what responsible experimentation looks like. We’ve entered a new era of critical thinking, something that insights professionals are well-practised in.
The path forward is to ask the right questions and maintain a healthy dose of scepticism without ignoring the future as it unfolds.
Make the tech your own
A good place to start is identifying the areas where you’re naturally drawn to using these tools. Before investing in any solution, you want to ensure that generative AI will fit into your workflows.
Likewise, it’s a good idea to gauge how open you and your team are to incorporating these technologies. This will help you determine if there need to be more guardrails in place or, conversely, more encouragement to experiment responsibly.
Sketch out the inefficiencies in your workflows, and explore whether they could be automated in whole or in part. These are likely areas where generative AI could offer your team a major productivity boost.
Chances are you don’t have time for endless experimentation. A good way to focus your exploration is to look at the top priorities for your team and your organisation and focus efforts where they will have the most impact.
Communication is key
Be sure to clearly outline the risks for your function and organisation, and don’t hesitate to get advice from relevant experts in tech or security. Once the main risks are defined, you can align on the risk level you’re willing to tolerate.
While minding the risks, also don’t be afraid to ask the good kind of “What if…?” questions. If you see opportunities, be brave enough to share them. Now is the time to voice them. Likewise, listen to other parts of the organisation to see what opportunities they’ve identified and what you can learn from them.
It’s our firm belief that the future of insights will still need to combine human expertise with powerful technology. The most powerful technology in the world will be useless if no one actually wants to use it.
Therefore the focus for brands should be on responsible experimentation, finding the right problems to solve with the right tools, and not simply implementing technology for the sake of it. With great power comes great responsibility. Now is the time for brands to decide how they will use it.