We’ll have to update legislative frameworks for an age of artificial intelligence

  • The large language models that power AI are probabilistic systems, which makes it hard to assign blame for breaking laws written on deterministic assumptions. This calls for some legal leeway, except where severe harm may be caused.

Rahul Matthan
Published 26 Nov 2024, 04:00 PM IST
AI is a probabilistic system but our legislative systems are not designed to deal with such systems.

Large language models (LLMs) work so well because they compress human knowledge. They are trained on massive datasets and convert the words they scan into tokens. Then, by assigning weights to these tokens, they build vast neural networks that identify the most likely connections between them.

Using this system of organizing information, they generate responses to prompts—building them, word by word, to create sentences, paragraphs and even large documents by simply predicting the next most appropriate word.
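
To make that concrete, here is a minimal, purely illustrative Python sketch of next-word prediction. The toy vocabulary and probabilities are invented for this example; a real LLM computes such probabilities from billions of learned weights, but the generation loop works in essentially the same way, one word (or token) at a time.

```python
import random

# Toy next-word probabilities, invented purely for illustration.
# A real LLM derives these from billions of learned weights.
NEXT_WORD_PROBS = {
    "the": {"law": 0.5, "model": 0.3, "market": 0.2},
    "law": {"requires": 0.6, "permits": 0.4},
    "model": {"predicts": 0.7, "hallucinates": 0.3},
    "market": {"moves": 1.0},
}

def pick_next(word: str) -> str:
    """Sample the next word in proportion to its estimated probability."""
    candidates = NEXT_WORD_PROBS.get(word)
    if not candidates:
        return "."  # stop when no continuation is known
    words, probs = zip(*candidates.items())
    return random.choices(words, weights=probs, k=1)[0]

def generate(prompt: str, max_words: int = 6) -> str:
    """Extend the prompt one word at a time, much as an LLM extends token by token."""
    words = prompt.split()
    for _ in range(max_words):
        nxt = pick_next(words[-1])
        if nxt == ".":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # output varies from run to run, e.g. "the law requires"
```

Note that repeated runs of this sketch produce different sentences from the same prompt, which is the probabilistic behaviour the rest of this column is concerned with.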

We used to think that there had to be a limit to how much LLMs could improve. Surely, there was a point beyond which increasing the size of a neural network would yield benefits that were marginal at best. However, what we discovered was a power-law relationship between the number of parameters in a neural network and its performance.

The larger the model, the better it performs across a wide range of tasks, often to the point of surpassing smaller, specialized models even in domains they were not specifically trained for. This is what is referred to as the scaling law, thanks to which artificial intelligence (AI) systems have been able to generate extraordinary outputs that, in many instances, far exceed the capacity of human researchers.
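
One common way of expressing this relationship (popularised by OpenAI's 2020 scaling-law work) is a power law under which a model's error falls smoothly as its parameter count grows. The sketch below uses assumed, illustrative constants; actual values vary with model family, data and training setup.

```python
# Illustrative power-law relationship between model size and test loss:
#     loss(N) ~ (N_c / N) ** alpha
# The constants below are assumptions chosen for illustration; published
# estimates vary with architecture, data and training setup.
N_C = 8.8e13    # assumed reference scale, in parameters
ALPHA = 0.076   # assumed power-law exponent

def predicted_loss(num_params: float) -> float:
    """Loss keeps falling, smoothly but ever more slowly, as parameters grow."""
    return (N_C / num_params) ** ALPHA

for n in (1e8, 1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} parameters -> predicted loss {predicted_loss(n):.2f}")
```

The point of the curve is simply that there is no obvious cliff: each tenfold increase in size keeps buying a predictable improvement.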

But no matter how good AI is, it can never be perfect. It is, by definition, a probabilistic, non-deterministic system. As a result, its responses are not conclusive; they are merely the most statistically likely answers. Moreover, no matter how much effort we put into reducing AI ‘hallucinations,’ we will never be able to eliminate them entirely. And I don’t think we should even try.

After all, AI is so magical precisely because of its fundamentally probabilistic approach to building connections in a neural network. The more we constrain its performance, the more we will forgo the benefits that it currently delivers.
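
One way to picture this trade-off is the ‘temperature’ setting commonly used when sampling a model’s output; the scores and answer labels below are invented for illustration. Turning the temperature down makes the model cling to the safest, most predictable answer, which is exactly the kind of constraint that trades away its more surprising, useful connections.

```python
import math
import random

def sample_answer(scores: dict[str, float], temperature: float) -> str:
    """Convert raw scores to probabilities and sample; a low temperature
    concentrates almost all of the probability on the single safest answer."""
    t = max(temperature, 1e-6)  # guard against division by zero
    weights = [math.exp(score / t) for score in scores.values()]
    return random.choices(list(scores), weights=weights, k=1)[0]

# Invented scores for three candidate answers, for illustration only.
answers = {"safe, expected answer": 2.0, "novel connection": 1.5, "wild guess": 0.5}

for temp in (1.0, 0.1):
    picks = [sample_answer(answers, temp) for _ in range(10_000)]
    share = picks.count("novel connection") / len(picks)
    print(f"temperature {temp}: novel connections {share:.0%} of the time")
```

At a temperature of 1.0 the novel answer turns up roughly a third of the time; at 0.1 it almost never does. Constraining the system buys predictability at the cost of exactly the connections that make it valuable.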

The trouble is that our legislative frameworks are not designed to deal with probabilistic systems like these. They are designed to be binary—to clearly demarcate zones of permissible action, so that anyone who operates outside those zones can be immediately held liable for those transgressions. This paradigm has served us well for centuries.

Much of our daily existence can be described in terms of a series of systematic actions, those that we perform in our factories or in the normal course of our commercial operations. When things are black or white, it is easy to define what is permissible and what is not. All that the person responsible for a given system needs to do in order to avoid being held liable is ensure that it only performs in a manner expressly permitted by law.

While this regulatory approach works in the context of deterministic systems, it simply does not make sense in the context of probabilistic systems. Where it is not possible to determine how an AI system will react in response to the prompts it is given, how do we ensure that the system as a whole complies with the binary dictates of traditional legal frameworks?

As discussed above, this is a feature, not a bug. The reason AI is so useful is precisely because of these unconventional connections. The more AI developers are made to use post-training and system prompts to constrain the outputs generated by AI, the more it will shackle what AI has to offer us. If we want to maximize the benefits that we can extract from AI, we will have to re-imagine the way we think about liability.

We first need to recognize that these systems can and will perform in ways that are contrary to existing laws. For one-off incidents, we need to give developers a pass—to ensure they are not punished for what is essentially a feature of the system. However, if the AI system consistently generates harmful outputs, we must notify the persons responsible for that system and give them the opportunity to alter the way the system performs.

If they fail to do so even after being notified, they should be held responsible for the consequences. This approach ensures that rather than being held liable for every transgression in the binary way that current law requires, they have some space to manoeuvre while still being obliged to rectify the system if it is fundamentally flawed.

While this is a radically different approach to liability, it is one that is better aligned with the probabilistic nature of AI systems. It balances the need to encourage innovation in the field of AI with the need to hold those responsible for these systems liable when systemic failings occur.

There is, however, one category of harms that might call for a different approach. AI systems make available previously inaccessible information and explain it in ways that ensure that even those unskilled in the art can understand it. This means that potentially dangerous information is more easily available to those who may want to misuse it.

These are referred to as the Chemical, Biological, Radiological and Nuclear (‘CBRN’) risks of AI: AI could make it much easier for persons with criminal intent to engineer deadly toxins, deploy biological weapons and initiate nuclear attacks.

If there is one category of risk that deserves a stricter liability approach, it is this. Happily, this is something that responsible AI developers are deeply cognizant of and are actively working to mitigate.

 
