On June 19th, 2024, Nvidia Corporation overtook Microsoft for a day as the most valuable company in the world, with a market capitalization of $3.34 trillion. This tremendous feat is only the latest milestone in Nvidia’s skyrocketing valuation, which is largely attributed to the company’s position as a leader in AI data center chips and AI-powered software for audio and video processing. As a supplier for major AI investments at tech giants such as Microsoft, Alphabet, and Meta, Nvidia’s success signals something much larger: the integration, infiltration, and immeasurable impact of AI across a wide array of industries and fields. In the midst of the AI boom, another company became globally known as a spearhead of generative AI: OpenAI. Generative AI is, broadly speaking, artificial intelligence able to generate text, images, video, audio, and other data using generative models in response to prompts. While the technical details of transformer-based deep neural networks are of little concern to the average person, the applications of generating a text file or image from a single sentence prompt seem endless. Many evidently shared this sentiment: OpenAI’s ChatGPT became the fastest-growing consumer software application in history, surpassing 100 million users by January 2023.
Along with ChatGPT, generative AI products such as the text-to-image model DALL-E and the text-to-video model Sora have been used to streamline workflows, for entertainment, and for art, among other purposes. However, the rapidly developing AI landscape presents several notable concerns. Issues of regulating AI development and use, job displacement, discriminatory bias, and cybercrime are salient problems within the rising AI wave, and how we deal with them will determine the impact and ultimate future of AI.
From a regulatory lens, the rapid pace of AI development strikes at the Achilles’ heel of many legislative systems, especially polarized democracies such as many Western nations. Copyright is greatly complicated by generative AI, both in the use of copyrighted material to train AI models and in the question of copyright protection for AI-created works. Many AI developers claim that training their models on datasets containing copyrighted works is protected under fair use, while copyright holders argue it infringes on their rights. These arguments come under further scrutiny when, for instance, a generative AI creates a portrait that is a near replication of an artist’s, just with a few extra fingers on the subject’s hands. Additionally, granting copyright to individuals using generative AI may unjustly confer ownership of a work lacking human authorship.
Another major concern is job loss and displacement. While technological unemployment is not a new concept, the advent of AI has placed additional pressure on many creative and qualitative occupations. In April 2023, it was reported that the AI boom had eliminated roughly 70% of game illustrator jobs in China. Jobs from writing to voice acting now compete against AI in many parts of the world. The difference in cost and production speed draws companies toward AI, at the potential expense of widespread unemployment and the quality and authenticity of work.
Among the extensive list of other worries are discriminatory bias in AI and cybercrime. A pitfall of generative AI is that it reflects the dataset it is trained on: when a model is trained on data drawn from certain communities, its outputs reflect whatever biases exist in that data, producing racist text or heavily stereotyped images. In efforts by AI companies to combat this bias, models can overcorrect and produce off-center results, as demonstrated by Google Gemini depicting the US Founding Fathers as people of colour.
The negative impacts of AI extend to criminal acts: generative AI’s ability to create realistic fake content has been maliciously exploited for phishing, disinformation, and fraud. Deepfaked video and audio can be held against an individual for actions they did not commit, and their mere existence makes it increasingly difficult for legal authorities to rely on footage as they have for decades. Additionally, hackers have found weaknesses in text-based AI that enable them to extract personal or harmful information past safeguards. As AI grows in scale and in control over our lives, it is increasingly urgent that governments take the necessary measures to mitigate these threats.
In the end, AI is still in its infancy, and it harbors “enormous potential for good and evil at scale,” as succinctly stated by UN Secretary-General António Guterres. As our technologies continue to advance, it becomes the duty and concern of the global population to weigh the dangers of our innovations as much as their advantages. Social activism and regulation of large companies like Nvidia and OpenAI may be the next steps toward a more secure future. AI as a technology will not go away any time soon, so the onus is on us to consider thoughtfully the role AI plays in our lives.