AI is not going away anytime soon, so now is the time to set up guardrails and use it where it can be most helpful to humans, like wading through mounds and mounds of data. Most AI tools are decent at sifting through that data to retrieve and use what they need for the task at hand. So, what does AI mean for the future of human workers? Not everyone will be affected the same way. However, as AI technologies continue to develop and become more efficient, it’s important that the human workforce adapt and acquire new skills so human workers remain relevant. After all, humans have something to offer, too.

“Sometimes it’s hard being superior to every single other person on the planet.” –Homelander, “The Boys,” Amazon Prime, 2019
According to every “expert” and cheerleader, AI is—or will be—a great tool for every country, every company, every individual on the planet. Imagine tools that can scour every bit of available data (new/old, legal/illegal, right/wrong, smart/dumb/stupid) and recommend the right solution/approach to doing stuff. In many cases, it will even do it for you… BAM! It can potentially save every organization and individual time, money, and effort.
We have two “minor” questions: Where did all of that data come from? And what are you going to do with all of the time, effort, and money you were going to invest in doing whatever it was you needed an answer to/solution for, now that AI has solved it?
Thomas Edison was said to have failed 1,000 times but was quoted as saying, “I will not say I failed 1,000 times; I will say that I found 1,000 ways that won’t work.” In other words, he created 1,000 points of data. As the Bolivian-American educator Jaime Escalante emphasized, “Life is not about how many times you fall down. It’s about how many times you get back up.” And as hockey great Wayne Gretzky noted, “You miss 100% of the shots you don’t take.”
We admit we hate failing or even coming up short of our own expectations, but as much as it sucks, we learn more from stuff when it goes awry than when it succeeds the first time out of the chute. But AI is here and gaining ground exponentially, so it’s important to understand it, know how to work with it, and know how to benefit from it.
AI must be good because we’re already seeing examples of how organizations and people can benefit from it.
Take Sam Altman, CEO of OpenAI, the AI R&D company that released ChatGPT for the world to use. It enabled him to buy a McLaren and a limited-production, multimillion-dollar plug-in Koenigsegg Regera.
It also enabled him to buy a $27 million mansion in San Francisco’s upscale Russian Hill neighborhood.
Okay, he probably didn’t use ChatGPT to vet the home purchase, because now he’s suing the developer, claiming the house is riddled with construction problems.
Obviously, the data used to make the home purchase decision was bad, or as any techie will tell you, GIGO (garbage in, garbage out). You can understand why people are concerned/excited/not really sure about how AI will affect their company, their work, their jobs, their lives. In a recent Pew study regarding people’s concerns about AI, 37% were more concerned than excited, 18% were more excited than concerned, 45% were equally concerned and excited, and less than 1% had no response. So, depending on what you see, read, or hear, generative AI is either the greatest thing since sliced bread, the end of mankind, or something you just don’t know/care about. The truth is we will determine our future.
Like a lot of “new breakthroughs,” AI has been around for a long time, going back to the ’50s when Alan Turing published “Computing Machinery and Intelligence” and proposed the idea of machine intelligence. It’s been talked about, and held up as both the new beginning and the beginning of the end, ever since. One of the most honest responses to the question of people’s concerns about AI came during a 2019 CBS 60 Minutes segment. Scott Pelley had interviewed Kai-Fu Lee, AI pioneer and VC, and later asked a young woman her thoughts on AI’s use and influence. She responded, “I don’t really think about it.”
Today, it is real, and we bump into it all the time. During the recent governmental elections around the globe, it proved to be agathokakological (both good and evil). As a result of the rush to develop and deploy AI solutions, the EU was the first to set legislation regarding the technology. To establish guardrails, the EU passed the DMA (Digital Markets Act), which allows the levying of heavy fines (up to 10% of an organization’s total worldwide annual turnover for a first infringement). Initially, the act applied to the industry’s leading gatekeepers (Alphabet, Amazon, Apple, TikTok owner ByteDance, Meta, Microsoft), but organizations are constantly being added to the list, emphasizing that the EU is serious when it comes to “with power comes responsibility.”

Since the EU’s passage of the DMA and the GDPR (General Data Protection Regulation), similar regulations have been passed by more than 150 countries. Okay, not China, North Korea, Russia, Iran, and a few other countries, but they have different priorities, so privacy and data protection aren’t big deals as long as you don’t think about it much.
But companies and countries around the globe are investing heavily in the development of both AI and guardrails because the potential and the challenges of AI are huge. While China and the US lead in the investment to develop, introduce, and use generative AI technologies, they are far from alone. Everyone is rushing to gain a decided edge.

Ever since IBM’s Deep Blue defeated reigning world chess champion Garry Kasparov in 1997 and Google DeepMind’s AlphaGo beat Go world champion Lee Sedol in 2016, it has been apparent that AI has the potential to tackle and solve ultra-complex questions, issues, and problems.
Okay, so the power requirements of the Nvidia-enabled supercomputers will increase by a factor of 10 every few years, and that’s sort of a problem. It’s probably a small price to pay for a world that will have increased productivity, job satisfaction, and what AI experts call a redefinition of what it means to have a job. They must have needed AI to come up with that “redefinition.”
But before we begin rushing to automate routine tasks, enhance decision-making, and redefine jobs, we have one small problem… data. Data comes from everywhere—websites, databases, devices, gawd knows where—but it’s usually just like the world around us: chaotic and disorganized. It’s a lot like life, and just remember, GIGO.
Fortunately, most AI tools are decent at wading through the morass of factual and false data to retrieve and use the data they need to adjust to personal and business needs and technological advances, and to do the required/requested task. We admit we don’t know how AI works or how it comes up with solutions or completes the requested tasks. But then, neither do most AI experts, and what goes on in those little black boxes bothers them… a little.
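To make the GIGO point concrete, here’s a minimal sketch (our own illustration, not any vendor’s actual pipeline) of the kind of quick-and-dirty cleanup that has to happen before messy data is worth handing to a model; the record fields and the junk-detection rules are made-up assumptions.

```python
# A minimal, illustrative data-hygiene pass: drop duplicates, blanks, and
# obvious machine noise before the data goes anywhere near a model.
# The record fields and "junk" heuristics below are assumptions for the example.

import re

raw_records = [
    {"source": "website", "text": "Edison tested thousands of filament materials."},
    {"source": "website", "text": "Edison tested thousands of filament materials."},  # duplicate
    {"source": "forum",   "text": ""},                                                # empty
    {"source": "device",  "text": "ERROR 0x7F: sensor read failed ###"},              # machine noise
]

def looks_like_junk(text: str) -> bool:
    """Crude heuristics: empty strings or error-log noise get dropped."""
    return not text.strip() or bool(re.search(r"ERROR 0x[0-9A-F]+", text))

def clean(records):
    seen, kept = set(), []
    for rec in records:
        text = rec["text"].strip()
        if looks_like_junk(text) or text in seen:
            continue  # garbage that stays out can't come back out
        seen.add(text)
        kept.append({**rec, "text": text})
    return kept

print(clean(raw_records))  # only the one usable record survives
```

The point isn’t the code; it’s that every dropped duplicate, blank, or chunk of machine noise is one less piece of garbage going in, and therefore one less piece of garbage coming out.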
Last year, more than 1,000 tech researchers and leaders signed an open letter to the industry urging that they pause the development of advanced AI systems, citing their concern over the risks to society and humanity. They wrote, “Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an ‘AI summer’ in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt.”
Obviously, that isn’t happening, because if you don’t have a couple of AI slides in your quarterly report to Wall Street, you’ll hear about it. And that is as it should be, because AI will ultimately be the most important, disruptive technology advance since the industrial revolution. The problem is that it will not deliver utopia, and the path to even modest results is going to be messy and seemingly inequitable.
To paraphrase what Steve Wozniak, co-founder of Apple, said: AI can deliver exceptional benefits for all of the people on the planet, but it also represents profound risks. We need to strike a balance between AI-assisted decision-making and human input. If we don’t, we’ll lose the ability to question everything, as well as our most important capabilities: creativity (especially important in the M&E industry), critical thinking, and good old intuition, that gut feeling as to whether the solution and the work done are correct or the best that can be done.
Some AI solutions are already reaping big benefits in the content creation, production, and delivery industry. Adobe’s Firefly is a family of generative AI models that is showing tangible results and benefits in solving problems in video, document, and audio work. The same is true for Avid’s ADA video and sound editing tools. The tools aren’t replacing video/sound editors, colorists, VFX specialists, artists, compositors, and mixers, but they are taking over the tedious, time-consuming, perhaps boring parts of the workload that turns all the digital material into a film, show, or ad. The tools free them up to fine-tune a video story that people will actually want to enjoy, whether they head to a theater or relax at home.

In addition, it gets them home at a decent hour! And speaking of a decent hour (poor segue, we know), some of ours have been reclaimed thanks to streaming services beefing up and enhancing their front ends. Netflix may not have been the first to give subscribers an intelligent opening screen that recognizes you, understands what you’ve viewed, and recommends movies/shows you’ll probably want to watch based on the genres and projects you’ve watched in the past, but it certainly popularized the approach.
As with most of the intelligently designed systems (Apple TV, Amazon Prime, Disney+), the longer you use them, the more data they accumulate, the better their recommendations. We know they use our data for a lot of things, including how they profile and green-light projects, and that’s okay—a fair trade for not having to waste 15–20 minutes hunting for entertainment. The problem is that each one has its own content/data silos, and going back to the old pay-TV bundle approach probably won’t happen.
Yes, the wired and wireless services offer mini bundles, but no one has them all. Fortunately, there is a way to loosely pull them all together without paying for Amazon Fire, Apple TV, or Roku. Yes, your smart TV will also do it, but that’s more information than we want to share with the manufacturer.
The best streaming show/movie front ends we have tried and prefer are JustWatch and Reelgood. As with the individual services, the more data they accumulate, the better their recommendation engines work, whether surfacing the specific project you’re interested in (regardless of the service) or making genre-based film/show recommendations—and they’re free.
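To see why more viewing history means better recommendations, here’s a toy genre-overlap sketch; the titles, genre tags, and scoring rule are our own illustrative assumptions, not how Netflix, JustWatch, or Reelgood actually rank anything.

```python
# A toy recommender: score unwatched titles by how well their genres overlap
# the viewer's accumulated genre profile. Titles and genres are invented.

from collections import Counter

CATALOG = {
    "Space Saga":      {"sci-fi", "adventure"},
    "Galaxy Heist":    {"sci-fi", "crime"},
    "Android Dreams":  {"sci-fi", "drama"},
    "Courtroom Clash": {"drama", "legal"},
    "Baking Battles":  {"reality", "food"},
}

def recommend(watch_history, top_n=2):
    """Rank unwatched titles by overlap with the viewer's genre profile."""
    profile = Counter(genre for title in watch_history for genre in CATALOG[title])
    scores = {
        title: sum(profile[genre] for genre in genres)
        for title, genres in CATALOG.items()
        if title not in watch_history
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

# One title watched: the two sci-fi candidates tie, so the ranking is a toss-up.
print(recommend(["Space Saga"]))
# Two titles watched: the added drama signal breaks the tie in favor of "Android Dreams".
print(recommend(["Space Saga", "Courtroom Clash"]))
```

With one title in the history the ordering is basically a coin flip between the sci-fi candidates; add a second title and the viewer’s profile starts to drive the ranking, which is all “the more data they accumulate, the better the recommendations” really means.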

But one of the best implementations of AI is in localization. Regardless of where in the world a project is shot/produced, AI has enabled services to localize the show/movie for their approved target markets. Subtitles are the cheapest road for them to offer video stories from Africa, Southeast Asia, Europe, or the Middle East. Yes, some jobs were lost with the implementation of AI-enabled localization, but it also increased the importance of having highly skilled translators on staff to double-check the accuracy of the translations; those jobs have become increasingly important, and the quality and quantity of film/show viewing options have increased manyfold.
In all three instances noted above, final input, approval, and decision-making have been handled by humans, and that isn’t just important, it’s vital. As AI technologies continue to develop and become more efficient, the workforce will have to adapt and acquire more skills so workers remain relevant.
Industry economists note that some of the firms already working/experimenting with generative AI have probably bought into the pitch from AI model and tool companies that focuses on replacing workers, or augmenting them, to minimize the need for more people in their workforces. However, business and industry economists emphasize that technology has never reduced net employment; instead, it has enabled workers to be more productive and companies to be more profitable.
But it won’t affect everyone the same way. As the technology continues to develop and becomes more efficient and more effective, organizations will need to help people adapt and develop new skills so they and the organization are able to address the new challenges/opportunities. As Homelander said in The Boys, “Companies, they come and go, but talent is forever.”
The main roadblock will be that AI makes people more reliant on technology. People like things to be simple and easy. We have to guard against automating too much just to relieve stress because, as painful as it is, a little stress is good.
Just remember, the future is unknown, and it hasn’t been written… yet.