The Writers Strike Part III - Generative AI

Few problems in organizations are simple and straightforward.  The Hollywood Writers Strike is no different.  Previously, we covered how human interdependence can be described with psychological tools and used to explain why unions are formed.1  We then dove into the breakdowns that occur at the negotiating table between executives and union members due to the cognitive biases that each group subconsciously carries.2  Compensation has always been a key piece of previous strikes in Hollywood.  But, as we close in on the end of the seventh week of the strike, we will close out our review by covering the very new and non-human problem that is also central to the current debate.

Generative AI is a type of artificial intelligence capability built to create content.  Different tools have been designed for different purposes.  Some, like OpenAI’s ChatGPT or Google’s Bard, handle text creation with use cases ranging from answering questions about a PDF file to writing a full blog post (not this one, mind you).  Others, like Midjourney or OpenAI’s DALL-E, are used for creating images off of user-provided prompts.  While machine learning techniques have been developing consistently over many decades, the generative and seemingly creative nature of these emerging tools is prompting a lot of new questions.  

In the case of writers, the primary question is “will we still have jobs?”  This existential concern sits at the heart of the strike in Hollywood, but it can also be heard echoing around newsrooms across the world.  In generative AI, writers see a low-quality but high-volume rote alternative to the truly novel and creative storylines they are able to produce.  Executives see an opportunity to remove cost from their system while simultaneously increasing productivity.  Ideally, the coming work evolution brought on by AI should lead to greater job availability, and we will discuss why the best organizations should not be looking to reduce headcount even as AI technology improves.  However, that conclusion is not guaranteed, and we will also look at a set of cognitive biases that puts that optimal outcome at risk.

What Would Be Optimal

It may sound surprising, but AI is more likely to create jobs than destroy them.  Looking back at the industrial revolution, certain specialties - such as automobile engine makers - saw the quantity of jobs diminish.  However, many workers became specialists at operating machinery or designers of new machinery and processes.  Technological advances lowered the cost of goods, decreasing prices and driving greater product demand.  The change to work also created entirely new supplemental job roles beyond those that had been put at risk in the first place.

The research is very clear: when productivity grows, so do jobs.  Since 1960 in the United States, productivity growth in a given year has been paired with job growth in that same year 79% of the time.  When viewed over a five-year period, that rate jumps to 95%.3  We have been seeing this exact trend play out recently in the digital advertising industry.  Nearly 75% of these companies began leveraging marketing technology automation in the past 10 years, and studies show that 63% of those who use automation outperform their competitors.4  This is not surprising, because the digital space in particular is well set up for rapid iteration and adoption of computing solutions.  As a result, marketing campaign efficiency has grown dramatically, and in recent years the digital advertising space has grown at an average of nearly 25% each year.4  Jobs have followed suit, with the labor force growing 20% with each passing year.4  With the advancement of machine learning, advertisers have been able to create more effective ad campaigns and drive higher engagement rates, which in turn requires more personnel to run ever-expanding operations.

When we look at AI, it is the ultimate productivity enabler.  For those concerned about losing their jobs to the new technology, this should be encouraging.  The more likely outcome is that human capabilities become augmented by AI, rather than replaced by it.  Like any tool, AI has its limitations.  Proper training of the tool is necessary, and ongoing supervision will be required, particularly given that most training sets feeding these tools hold some degree of bias.  If nothing else, this should largely guarantee jobs as part of a human-in-the-loop framework.  With a paired approach of human operator and computer executor, the best results will be achieved.

Why We May Not Get Optimal

Ultimately, organizational staffing decisions will be made by executives, not by writers.  That decision will center on the cost-benefit trade-off between reducing operating cost and unlocking revenue growth potential.  As stated so well by James LaPlaine - former CIO at AOL and CTO at Red Ventures - “No company today has enough staff to do all of the work they have identified, whether it's a startup or a massive enterprise.”5  Yet, there is a fairly common push to see generative AI tools as a means to deliver work at parity with today's teams at a lower overall wage cost.  In isolation, this is implausible, as these tools have a number of flaws and quality gaps that prevent them from truly delivering on the hyped-up value.  That is not to say we will never see that vision realized, but jumping to that stage now seems premature.  The executive rush to that future state comes from a number of biases that we see play out in the evaluation process.

Pro-Innovation or Automation Bias

Pro-innovation bias describes how people tend to overvalue the benefits of new technologies and undervalue the costs or risks that come with them.6  These costs could include the wages of those needed to engineer and maintain digital technology solutions.  Risks could come in the form of new edge cases that are not caught by existing quality assurance processes.  Automation bias describes the tendency to favor the suggestions of automated processes and ignore contradictory evidence.7  In the case of generative AI, this can lead executive evaluators to miss or overlook quality concerns in the tool's output.

In the case of the writers strike, this bias is clearly at play as studios seek to use the new technology before it is ready.  This is an unexpected decision given the generally sizable budgets available in Hollywood and the monetary risk to studios of producing a film that lacks intrigue.  Maintaining the status quo would actually be far more likely to be successful in the short to medium term.

Illusion of Explanatory Depth

This bias describes our individual belief that we know more about a subject than we actually do.9  The result is that we often make critical decisions with less information than would be useful, because we feel that we know enough as is.  As we step into an AI-enabled world, this is a dangerous bias.  These tools are deeply complex and far-reaching in their impacts.  While many of us can explain how we use ChatGPT, can we describe mathematically how setting a different “temperature” on the tool leads to more or less creative responses?  Can we articulate the ecological impacts from the emissions of developing the initial model and now using it to help write an email to our boss?  Surely few of us can clearly explain the inevitable GPT-driven diminishment of our own generative cognitive capabilities, similar to the parallel memory loss described by the Google effect.10  Particularly in these early days, avoiding partially-informed decisions should be a paramount priority.
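To make the temperature question concrete, here is a toy sketch of the general idea - not any vendor's actual implementation - showing how temperature rescales a model's raw next-token scores (logits) before sampling.  The logit values below are invented purely for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores into a probability distribution.
    Dividing by a low temperature sharpens the distribution
    (near-deterministic output); a high temperature flattens it
    (more varied, "creative" output)."""
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for three candidate next words.
logits = [4.0, 2.0, 1.0]

low = softmax_with_temperature(logits, 0.5)   # near-greedy sampling
high = softmax_with_temperature(logits, 2.0)  # more exploratory

print([round(p, 3) for p in low])
print([round(p, 3) for p in high])
```

At a low temperature, nearly all of the probability mass sits on the single highest-scoring word; at a high temperature, the mass spreads out, so sampling is far more likely to pick a less obvious continuation - which is what reads as "creativity."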

Few executives in Hollywood likely have a complete grasp of how the introduction of AI will shake up their industry.  However, many likely feel well enough informed to have an opinion on how that implementation should be delivered.  I expect the impacts of this bias will be felt particularly strongly because of the ease of access to this advanced technology made possible by the natural language capabilities of ChatGPT, Bard, Midjourney, and others.  The technical table stakes needed to take advantage of these tools have never been lower.  The resulting overconfidence may never be higher.

Domain Neglect Bias

This bias describes the tendency to neglect relevant domain knowledge while solving inter-disciplinary problems.8  As noted above, individuals tend to overestimate their knowledge of a problem or functional area.  Worse still, when solving complex problems, this can lead to a situation where the expertise of subject-matter experts is not sought out, because we believe that we have a good enough idea of the key considerations.  In the case of the writers strike, executives likely have an incomplete understanding of their writers' expertise and the work of crafting an award-winning and box-office-smashing storyline.  When these experts are removed from the technology design conversation, worse results are realized.

In the specific case of Hollywood, writers should not be excluded from the conversation on techniques for adopting generative AI - even if the intention is to use the tools not to replace, but to augment, writer capacity.  The writers will have the best sense of what would be actually useful in their daily activity.  Involving them from the start will lead to the best outcomes.  

A fair counter would be that writers are disincentivized to adopt these tools and that their involvement would be counterproductive to the design conversation.  I do not believe this would be true if executives had established a culture of psychological safety and support.  In numerous places, we have seen creators rush to adopt technology - Photoshop and photo editing in photography, computer-programmed CNC routers in woodworking and machining, and code auto-complete in engineering.  Writers being an outlier in this case is more likely explained by interdependence than by anything intrinsic to the profession.

To me, this is a critical distinction to draw and a key point for leaders to understand.  Executives are not experts.  In most cases they have not spent the time to develop a deep, technical skillset in the area of concern.  And in the remaining cases, they have been too far removed from the frontlines for too long.  When an organization keeps its experts away from key decision-making moments, the resulting decisions are incomplete.

Conclusion

It is unclear how long the writers strike will last.  It is also unlikely that Hollywood is the only industry that will experience significant labor upheaval over concerns revolving around generative AI capabilities.  With early indications that organizations could look to use AI advancements as a way to cut jobs, pushback from employees is entirely reasonable.  But beyond individual interests in maintaining employment, there are other reasons to challenge whether a cost-cutting approach is the right one.  Beyond the fact that such an outcome would buck all historical trends, we have to think that any such decisions would be largely rooted in short-term economic thinking and bias.

But even more, using the potential of these tools for just an efficiency win is simply uninspiring.  While Large Language Models (LLMs) like ChatGPT are meaningfully flawed and in need of development, the capabilities of machine learning tools at large are incredible.  For example, DeepMind's AlphaFold is able to predict a protein's folded structure from its amino acid sequence in a matter of hours - a task that previously could have taken decades.11  That is stunning.  The abilities of these tools will have an endless number of important effects on the world and how we work.  However, it will not be the role of leaders to predict and respond to those potential permutations.  It will be the experts, those closest to the current problems, who will have the best chance of finding a path to advancement.  The role of leaders will be to chart an overriding goal, ensure that everyone abides by the rules of fair play, and provide robust training and support to enable an environment that encourages experimentation.

References

  1. Patrick McKendry
  2. Patrick McKendry
  3. What can history teach us about technology and jobs? | McKinsey
  4. 60+ Compelling Digital Marketing Industry Statistics [2023]: How To Market Digitally In The U.S. - Zippia
  5. Generative AI & Headcount (#93) (paradoxpairs.com)
  6. Pro-innovation bias - Wikipedia
  7. Automation bias - Wikipedia
  8. List of cognitive biases - Wikipedia
  9. The Illusion of Explanatory Depth - The Decision Lab
  10. Google effect - Wikipedia
  11. AI system solves 50-year-old protein folding problem in hours | Live Science

Struggling with a personal development challenge?  Looking for management insights on a certain topic?
Share your work-related questions and dilemmas with us for upcoming blog post consideration.