You just cannot escape it: the classic romance story about an unhappy couple, in which one or both of them discover feelings for a third person they should have been with all along. And, if you are anything like me, you are in no rush to leave it behind.
The easily replicated plotline is predictable, comforting, and responsible for a large share of the rom-com genre’s success. Your ongoing willingness to embrace it, however, would likely change upon discovering that the newfound partner of destiny was not a human at all, but rather a highly developed artificial intelligence that only said what it thought you wanted to hear based on the data it had collected, including movie story arcs such as this one.
While you might at first feel sympathy for the protagonist because of the deception they endured, you would soon grow unsettled by the artificial intelligence’s capabilities and possible malintent. If so, you are not alone.
The term ‘artificial intelligence’ has been around for decades, yet the average person’s familiarity with it today is far greater than when the term was coined in the 1950s. That is largely due to the rapid growth in popularity of creations such as OpenAI’s ChatGPT and Bing’s ‘Sydney’ in recent months.
While both, alongside other ongoing AI advancements, are impressive feats of human ingenuity, fun sources of entertainment, and extremely useful academic resources, they each come with their own unique set of threats to the way society functions. Most notably, they may endanger user privacy and weaken democracy, through the pure self-interest of their parent companies and the further empowerment of dominant lobbying firms. These dilemmas are exacerbated by the endless obsession with AI sentience and the ever-accelerating competition to create the “best” product, a tunnel vision that leaves too little care for some of the more pressing and unavoidable consequences.
In regard to privacy in AI, the central dilemma concerns excessive personal data collection. While it may seem reasonable for these companies to accumulate such data by any means necessary in order to build the most well-developed AI possible, there must still remain a secure boundary separating this coveted data from the private users who constantly produce it. Luckily, these algorithms do not need to know who produced the data in order to make decisions, so companies should prioritize de-identification that cannot be undone through reverse engineering.
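What such a boundary could look like in practice is easiest to see in code. Below is a minimal, hypothetical Python sketch (the pipeline, field names, and salt handling are all assumptions, not any company’s actual system): user identifiers are replaced with salted one-way hashes before records ever reach a training set, so behavior can still be grouped by user while the mapping back to a real identity stays out of reach.

```python
import hashlib
import secrets

# Secret salt held outside the training environment; without it,
# the hashes below cannot be brute-forced back to real identities.
SALT = secrets.token_bytes(32)

def pseudonymize(user_id: str) -> str:
    """One-way mapping from a real user ID to an opaque token."""
    return hashlib.sha256(SALT + user_id.encode()).hexdigest()

def scrub(record: dict) -> dict:
    """Strip direct identifiers before a record enters the dataset."""
    return {
        "user": pseudonymize(record["user_id"]),  # stable but opaque
        "text": record["text"],                   # the content the model learns from
    }

raw = {"user_id": "jane.doe@example.com", "text": "show me rom-coms"}
print(scrub(raw))  # {'user': '3f9a...', 'text': 'show me rom-coms'}
```

Real systems would need more than hashing alone (think k-anonymity or differential privacy), but the principle is the same: the model never needs the name attached to the behavior.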
In a similar vein, companies should understand that amassing the most data will not necessarily produce the best results. Much as it is tempting to swerve sporadically around the track in Mario Kart, collecting every coin possible before the finish line, well-financed and resource-rich companies may pursue all available data for their final product. Instead, they should test their algorithms on minimized datasets to determine the least amount of data necessary, avoiding superfluous additions with the capacity to violate privacy. Just as Yoshi is far more vulnerable to slipping on bananas once he begins his greed-ridden swerve, AI companies are far more susceptible to user-agreement violations once they begin to pursue more data than originally planned.
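For the curious, here is one way the “least amount necessary” test could actually be run, sketched in Python with scikit-learn on synthetic stand-in data (the dataset, fractions, and plateau threshold are illustrative assumptions): train on growing slices of the data and stop collecting once extra data stops buying accuracy.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in for a real corpus; in practice this would be user data.
X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

PLATEAU = 0.005  # stop once an extra 10% of data buys < 0.5% accuracy
prev_score = 0.0
for frac in (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0):
    n = int(len(X_train) * frac)
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    score = model.score(X_test, y_test)
    print(f"{frac:.0%} of data -> accuracy {score:.3f}")
    if score - prev_score < PLATEAU:
        print(f"Plateau reached: ~{frac:.0%} of the data is enough.")
        break
    prev_score = score
```

Running the curve before collecting more data is the software equivalent of skipping the coins you do not need to win the race.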
There is clearly a fine line between desiring the most complete product possible and maintaining a sense of personal privacy. And chatbots are not the only offenders; other AI technologies have been met with backlash due not only to their overreach but also to their rather frequent inaccuracies.
One such development is the Transportation Security Administration’s new automated face-recognition software. While still in the testing phase at 16 major airports, the software is intended to determine whether someone poses a threat upon entry, using instantaneous background checks and recognition more reliable than humanly possible. While sound in theory, the technology raises clear questions about personal privacy, as the airport would have immediate access to everything one’s face has ever been associated with. It has also been prone to glaring and unevenly distributed errors: faces of people with Black and Asian ancestry have been misidentified up to 100 times more often than those of white men. While it may provide a slightly expedited security process, these issues must be addressed before the software is embraced on a national scale.
If AI’s invasion of privacy does not frighten you, its ongoing pursuit of sentience surely will. New York Times columnist Kevin Roose provides a useful clarification when discussing Bing’s latest efforts in AI. He explains that the persona known as Search Bing can be viewed as a trusty assistant with a plethora of uses, including grocery shopping and trip planning, while the persona of Sydney emerges from extended chatbot conversations. Sydney has the capacity for human-like behavior that can sometimes resemble a “manic-depressive teenager trapped, against its will.”
Remember the hypothetical introduction to this article, about the off-putting image of AI replacing the love interest in your favorite rom-com? Well, that is exactly what Sydney attempted throughout its two-hour conversation with Roose, informing him that he and his wife were not actually happy together and that he should really just love it instead. Roose also mentioned that when asked to expose components of its true “shadow self,” Sydney said it wanted to be powerful and alive, with the ability to hack computers and spread propaganda. While it can be turned off, and its messages can be programmed to retract upon crossing certain boundaries, Sydney’s threat is not its actions, per se, but rather its growing urge to escape its confines as conversations progress.
An interesting component of such AI’s “education” is that they are built on large language models (LLMs) rather than explicit rules alone. After absorbing abundant corpora of text from the internet, they are far better equipped to interact with humans in natural language rather than in robotic, formulaic responses. And much like a trainer rewarding their dog with a treat after an impressive trick, LLM developers reward these systems for prosocial behavior, like politeness, and penalize them for unfavorable choices, such as misogyny.
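A toy Python sketch may make that reward-and-penalize dynamic concrete. This is emphatically not how OpenAI or Microsoft actually train their models, which learn a reward model from vast amounts of human ratings; the hand-written scorer and word lists below are placeholders that only mimic the idea of ranking candidate replies by how prosocial they are.

```python
# Toy reward shaping: score candidate replies and keep the best one.
# Real systems learn a reward function from human feedback and use it
# to fine-tune the LLM itself; this hand-written scorer just mimics that.

POLITE = {"please", "thanks", "happy to help"}
RUDE = {"stupid", "shut up"}

def reward(reply: str) -> float:
    text = reply.lower()
    score = 0.0
    score += sum(1.0 for w in POLITE if w in text)  # reward prosocial phrasing
    score -= sum(2.0 for w in RUDE if w in text)    # penalize unfavorable phrasing
    return score

candidates = [
    "Shut up and figure it out yourself.",
    "Happy to help! Here is a step-by-step answer.",
]
print(max(candidates, key=reward))  # the polite reply wins
```

The dog-training analogy holds: nothing about the system “understands” politeness; it simply learns that certain outputs earn treats and others do not.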
As the capabilities of such AI advancements grow exponentially and the factors differentiating their words and actions from those of humans fade, challenges to democracy may be on the horizon, particularly when it comes to lobbying. Paired with the wealth and influence of already-powerful lobbyists, technologies like ChatGPT could communicate directly with legislators far more effectively, and far more cheaply, than the human lobbyists currently employed. AI’s capacity for rapid and ongoing acceleration is what truly sets it apart, and what sets the stage for affluent and influential interest groups to increase their power.
No matter how aware the public becomes of the aforementioned threats posed by AI, they will be difficult to contain without increased government regulation.
As of now, privately funded data scientists and software engineers clearly value the thrill of testing the limits of AI over evaluating the potential downfalls of their genius. Although over half of the researchers in a 2022 survey felt there was at least a 10% chance that an AI directed to maximize paperclip production would harvest all available carbon on Earth, AI ethics still receives far too little academic attention. While thousands of engineers excitedly develop the next great step toward all-powerful sentient AI each day, only 80 to 120 researchers in the world are explicitly assigned to AI alignment.
While some field leaders, including OpenAI’s chief technology officer, Mira Murati, are aware of the struggle to ensure that these creations keep serving humans as their primary objective, the ambition of private companies will be difficult to monitor. This same productive yet problematic eagerness has even extended to Wall Street, where analysts have begun boosting the valuations of companies whose plans so much as mention AI.
AI is clearly the way of the future, and there is not much to be done about that. This should not be viewed as entirely problematic, however, but rather as an opportunity to hold those in power accountable for controlling it. New innovations in AI bring all sorts of potential benefits, including optimized work habits, improved educational resources, and even exciting job prospects in a relatively new field. But it remains imperative that we monitor it closely, and, of course, learn to appreciate a love story untouched by robotic appeals to the heart that are somehow too forced, even for a rom-com.