For centuries, novelists have dramatized technology’s impact on humanity. In Frankenstein, Mary Wollstonecraft Shelley wrote, “Nothing is so painful to the human mind as a great and sudden change.” In his 1952 debut novel Player Piano, Kurt Vonnegut was more pointed: “Those who live by electronics, die by electronics.” The contemporary novelist Jonathan Franzen has even said he is “doubtful anyone with an internet connection … is writing good fiction.”

The rest of humanity, it seems, has taken such apocalyptic warnings to heart. A survey by AAA found that 91% of Americans either fear self-driving cars or are uncertain about them. Gallup determined that three-quarters of people believe artificial intelligence (AI) will eliminate more jobs than it creates.

The problem? These fears are largely unfounded. Innovation comes with risk, but it also results in richer, more fulfilling lives. Take the invention of the locomotive as an example.

When trains were first developed, their potential for speed terrified much of the public. Many worried that traveling at such speeds would leave passengers unable to breathe, or that the intense vibrations would shake them unconscious.

However, trains have proven to be one of the most significant transportation inventions in human history, and the fears about speed from their early days have long since abated (Japan’s Shinkansen, or “bullet train,” travels an impressive 200 miles per hour, and its speed is its selling point).

Such history notwithstanding, government policymakers try to manage risk by imposing top-down edicts and restrictions. To wit: local, state, and federal policymakers have begun to consider policies to harness emerging technologies like AI, digital assets, and large language models like ChatGPT. Our community of scholars is ensuring liberty-advancing research is part of these conversations. As part of that effort, on Aug. 19 the Institute for Humane Studies (IHS) will host a one-hour virtual discussion with Eric Alston, scholar in residence in finance at the University of Colorado Boulder and faculty director of the Hernando de Soto Capital Markets Program at the Leeds School of Business.

IHS sat down with Alston for an interview this summer. Here is what he had to say about government regulation of technology. 

Regulation does not prevent harm; it breeds it

A scholar in the field of digital governance, Alston has written about government regulation and private governance of digital assets, data trusts, encryption, privacy rights, and blockchain. He incorporates work from three fields: institutional and organizational economics, law and economics, and constitutional design. 

The biggest challenge facing society when it comes to digital governance is the growing belief among U.S. policymakers that, through regulation, they can prevent emerging technologies from having any ill effects. This notion will dampen innovation in the United States while illiberal countries like China continue unthwarted, Alston said.

Regulation of emerging technologies is based on the misguided notion that policy can eliminate risk and mandate a certain result. “Sometimes that outcome is obtainable, but often [there are] many unintended consequences,” Alston said. “In other instances, that outcome is simply not obtainable and, notwithstanding that fact, we sort of thrash around through our administrative state trying to achieve things that are functionally impossible or prohibitively expensive.”

Adhering to classical liberal ideas is more likely to lead to advancements in health and prosperity, and these principles are just as relevant in the digital age. “Often the dilemmas we confront in coordinated digital environments are very similar to governance dilemmas we have struggled with for centuries, if not millennia,” Alston argued. “So for me, my approach is one of saying, let’s start with what we know and apply it in novel context.”

Alston has been part of IHS’ network since he was a graduate student at the University of Maryland. He said the opportunity to meet with scholars from around the world to explore how classical liberal ideas can be applied to digital governance has helped advance his own work and thinking. “I’m kind of on an island in my own building in the sense that … business schools don’t need more than one or two blockchain and digital currencies researchers,” he explained. “Creating a thoughtful space where there are optimistic perspectives about the disruptive benefits of new technologies [along with] opposing views is essential because that’s how we progress.” 

Allaying fear and reducing risk is possible without top-down edicts

Addressing complex social, economic, and collective action problems facing modern societies requires leading-edge thinking and fresh application of the principles that drive human progress. IHS is building an academic network of high-impact scholars that spans generational, disciplinary, ideological, and professional divides and that will develop and advance ideas that support thriving, free societies and widespread human flourishing. 

When it comes to questions about digital governance, this network includes Alston, along with Amy Bruckman and Maria Minniti. Bruckman, a professor at the Georgia Institute of Technology, examines how individuals integrate technologies like AI into their daily lives; Minniti, a professor at Syracuse University’s Whitman School of Management, explores freedom-oriented solutions surrounding AI.

In an IHS virtual discussion with Alston earlier this year, Minniti explained how federal policymakers’ decision to hastily regulate civil drone technology resulted in the United States missing out on productive applications of the technology. Additionally, in The Handbook of Innovation and Regulation, Minniti argued the effect of government regulation on technology development is substantial and may erode living standards. 

Bruckman shares Alston and Minniti’s skepticism of heavy-handed technology regulation — and tries to take a bottom-up approach in her classroom that gives each student the power to decide how they leverage AI. She acknowledged that stressed students may use large language models to cut corners, but said those decisions do not mean regulators, professors, and parents should deny students access to technology.

“It’s not our job to take a new technology and tell people not to use it,” Bruckman said in an interview with IHS. “It’s our job to teach them to use it appropriately. So my new AI policy says if there are factual mistakes, I am increasing the penalty for each factual mistake. If there are weird stylistic problems, I am increasing the penalty for poor writing. I can’t tell students not to use these tools … but what I can do is to hold them responsible for the quality of what they hand in.”

If a principled approach like this one can compel students to think more prudently about their use of AI, can it also entice innovators to develop more thoughtful technologies? Bruckman seems to think so. Instead of lobbying to shape regulation, something Alston said tech firms are increasingly engaged in, innovators should focus on what they do best: developing creative solutions that respond to consumer wariness.   

“I would like, for example, to see corporations create a smart search engine that highlights how it knows everything it tells you,” Bruckman said. “Currently, a search result says, ‘Here’s the answer and don’t worry your pretty little head about where we got it from.’ I want the opposite. I want a search result that says, ‘Okay, this is what I think the answer is, and let me annotate this with every piece of evidence supporting where I got it from.’ That’s what we need.”
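
In software terms, Bruckman is describing answers that carry their own provenance. As a purely illustrative sketch (the class names and fields below are hypothetical, not part of any existing search engine’s API), such a result might pair each claim with the evidence behind it:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an evidence-annotated answer: every claim
# in the result carries the sources that support it.

@dataclass
class Source:
    url: str          # where the evidence came from
    excerpt: str      # the passage that supports the claim

@dataclass
class AnnotatedClaim:
    text: str                                        # one factual statement
    sources: list[Source] = field(default_factory=list)

@dataclass
class AnnotatedAnswer:
    question: str
    claims: list[AnnotatedClaim]

    def render(self) -> str:
        """Format the answer with a citation after each claim."""
        lines = [f"Q: {self.question}"]
        for i, claim in enumerate(self.claims, start=1):
            refs = ", ".join(s.url for s in claim.sources) or "no source"
            lines.append(f"{i}. {claim.text} [{refs}]")
        return "\n".join(lines)
```

The value of a structure like this is transparency: a claim with no supporting sources is visibly flagged as such, rather than hidden behind a confidently worded answer.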

Like Bruckman, Alston suggested innovators have the power to allay the fears novelists, policymakers, and everyday users have about emerging technologies. “Private creation, private innovation, especially in new technological frontiers, is one of the most crucial components for social flourishing we have,” he said.

Join the Aug. 19 webinar with Eric Alston to discuss how a principles-based approach will help policymakers, tech firms, and scholars protect consumers from potential harm while allowing innovation to flourish.
