MacResource
"Why some tech leaders are so worried about a California AI safety bill." - Printable Version

+- MacResource (https://forums.macresource.com)
+-- Forum: My Category (https://forums.macresource.com/forumdisplay.php?fid=1)
+--- Forum: 'Friendly' Political Ranting (https://forums.macresource.com/forumdisplay.php?fid=6)
+--- Thread: "Why some tech leaders are so worried about a California AI safety bill." (/showthread.php?tid=287948)



"Why some tech leaders are so worried about a California AI safety bill." - Ted King - 06-16-2024

If I build a car that is far more dangerous than other cars, don’t do any safety testing, release it, and it ultimately leads to people getting killed, I will probably be held liable and have to pay damages, if not criminal penalties.

If I build a search engine that (unlike Google) has as the first result for “how can I commit a mass murder” detailed instructions on how best to carry out a spree killing, and someone uses my search engine and follows the instructions, I likely won’t be held liable, thanks largely to Section 230 of the Communications Decency Act of 1996.

So here’s a question: Is an AI assistant more like a car, where we can expect manufacturers to do safety testing or be liable if they get people killed? Or is it more like a search engine?

This is one of the questions animating the current raging discourse in tech over California’s SB 1047, legislation in the works that mandates that companies that spend more than $100 million on training a “frontier model” in AI — like the in-progress GPT-5 — do safety testing. Otherwise, they would be liable if their AI system leads to a “mass casualty event” or more than $500 million in damages in a single incident or set of closely linked incidents.

The general concept that AI developers should be liable for the harms of the technology they are creating is overwhelmingly popular with the American public, and an earlier version of the bill — which was much more stringent — passed the California state senate 32-1. It has endorsements from Geoffrey Hinton and Yoshua Bengio, two of the most-cited AI researchers in the world.

Would it destroy the AI industry to hold it liable?

Criticism of the bill from much of the tech world, though, has been fierce. [duh]

“Regulating basic technology will put an end to innovation,” Meta’s chief AI scientist, Yann LeCun, wrote in an X post denouncing 1047. He shared other posts declaring that “it's likely to destroy California’s fantastic history of technological innovation” and wondered aloud, “Does SB-1047, up for a vote by the California Assembly, spell the end of the Californian technology industry?” The CEO of HuggingFace, a leader in the AI open source community, called the bill a “huge blow to both CA and US innovation.”

The author follows that up with a somewhat biased, but generally fair, overview of the issues and ramifications.

This seems like a big deal to me.


Re: "Why some tech leaders are so worried about a California AI safety bill." - Tiangou - 06-16-2024

The distinction between a Google search result and a large language model's generative answer is that Google's algorithm simply orders the results of a search, prioritizing sites that contain relevant terms, and you can then poke through those sites and decide which ones you find most relevant. An AI, meanwhile, grabs pieces of different things from its training data and presents the result as an authoritative answer, with little or no context that would let you look into the sources and use your own judgment.
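
To put the difference in concrete terms, here is a toy Python sketch (the documents, query, and scoring are invented purely for illustration, not how any real engine works): a keyword search hands back an ordered list of pages you can inspect yourself, while a generative model hands back one synthesized answer with no sources attached.

documents = {
    "site_a": "step by step guide to baking bread",
    "site_b": "history of bread in europe",
    "site_c": "bread baking temperatures and times",
}

def rank_by_keywords(query, docs):
    # Score each page by how many query terms it contains, then sort.
    # The user still opens the pages and judges the sources themselves.
    terms = query.lower().split()
    scores = {name: sum(term in text for term in terms) for name, text in docs.items()}
    return sorted(docs, key=scores.get, reverse=True)

print(rank_by_keywords("bread baking temperatures", documents))
# ['site_c', 'site_a', 'site_b'] -- an ordered list of sources to check.
# A generative model, by contrast, would return a single fluent paragraph
# assembled from whatever it ingested, with no such list attached.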

And much of the time, that answer from the AI is not at all authoritative: it either draws on sources that provide misinformation, or it "hallucinates" a result that is completely, and potentially disastrously, wrong.

This is a "Hell YES there's liability!" situation.


Re: "Why some tech leaders are so worried about a California AI safety bill." Long response - Diana - 06-16-2024

First, I haven't searched for anything regarding this on the web. The following are just my thoughts.

Second, I deal with models on a daily basis. They are neither inherently good nor bad; they are a way that we put information together in hopes of yielding a desired result. Are they biased? Hell yes. I don't think we can make them completely unbiased, but it's worth trying.

So, here goes.

The nature of the AI assistant is more like a search engine than a car. But these are two very different technologies.

It doesn't matter how much computer stuff you throw into the car: a "car" is a mechanical item that produces a predictable response to a given stimulus. For instance, you press on the accelerator and the car goes faster, or the engine revs up; you press the brake and the car slows, or stops. The exact method by which these events happen is not that relevant to this discussion: a car is a mechanical item on which the chaotic nature of the rest of the world has very little or no influence. (Before you bristle at the "chaotic nature" statement, read on.) The only chaotic things that happen with a car are due to the failure of a part, or to the operation of the car (the idiot behind the wheel). Parts that fail need to be replaced--this is called maintenance. Thus, if the car ultimately leads to someone getting killed, it is often litigated: someone gets sued, someone pays.

A basic computer is also a mechanical item: very predictable in its response. Turn on this particular chip and something happens; turn it off and another thing happens. If it fails, the computer dies, and that usually doesn't kill anyone. If it were to explode and kill someone that way, perhaps someone would litigate and perhaps someone would have to pay. But the basic computer is just an object, a tool to be used.

Now, the nature of the computer is that it needs to be programmed (yeah, I know you know this stuff). The programming can be very discrete in its nature, or not so much. A discrete problem can be worked out on the computer without a lot of fuss, even if it takes a lot of time; it takes no guesses and no assumptions. A non-discrete problem, however, is a completely different animal, in that it DOES take guesses and it DOES take assumptions to solve; often it takes multiple passes through the problem from multiple different directions, and the answer the computer settles on is a convergence of these attempts, where each subsequent attempt differs less and less from the one before: it finds a minimum in the differences between answers, it converges to an answer.

Is this the true minimum, or just some minimum it found (a local minimum) on its way toward the true one? That depends on how many times it tries to find the "answer" and whether it ever finds one that fits the minimization better. So where does it start, and how does it pick a starting point? Here's where the chaotic nature of the universe steals into it: the program starts with a value usually produced by a random number generator of some type (which isn't truly random, as some will tell you). The answer we get depends on the starting point and the assumptions we make. If the assumptions are wrong, the answer is most likely wrong as well.
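
To make the "local minimum" point concrete, here is a toy Python sketch of that kind of iterative search (the function, step size, and starting range are invented purely for illustration):

import random

def f(x):
    # A bumpy curve with a shallow minimum near x = 1.1 and a deeper one near x = -1.3.
    return x**4 - 3*x**2 + x

def df(x):
    # Derivative of f, used to step "downhill."
    return 4*x**3 - 6*x + 1

def descend(start, step=0.01, iters=2000):
    x = start
    for _ in range(iters):
        x -= step * df(x)  # keep stepping downhill until the updates barely change anything
    return x

random.seed(0)
for _ in range(3):
    start = random.uniform(-3, 3)  # the random starting point mentioned above
    end = descend(start)
    print(f"start {start:+.2f} -> settled near {end:+.2f}, f = {f(end):.3f}")

# Different random starts can settle into different minima: the "answer" you get
# depends on where the procedure happened to begin and the assumptions built into it.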

AI, Artificial Intelligence, is just a computer program. Hell, the definition of intelligence changes from day to day! Intelligence is NOT a discrete problem! It has to start with something, it has to use the assumptions we give it. In addition, it is a MODEL of the way real intelligence works.

There are two broad types of models: supervised and unsupervised. Supervised models are those where we hand the model labelled data and parameters, and it finds the answers for us. They are strict, and they require human supervision to work: we give it data and it uses that data as we have labelled it, never going outside of its "lane." Thus, when it gives an answer, we are assured that the answer derives from the data we gave it; if we have mislabelled or miscategorized something, the error is due to the nature of the data WE GAVE IT. An unsupervised model, on the other hand, also uses the data we give it, but it finds correlations and relationships within that data that we never specifically told it about, hence the "unsupervised" part. We may well be unaware of those correlations and relationships, or they may be spurious. But we didn't knowingly give them to it; it "learned" them on its own. Either way, if the data contains biases, the model will be biased.
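
Here is a rough sketch of that distinction in Python using scikit-learn (the points and labels are invented just to illustrate the two modes; this is not how any particular AI product is built):

from sklearn.neighbors import KNeighborsClassifier
from sklearn.cluster import KMeans

# Supervised: WE supply the labels, and the model never leaves those lanes.
points = [[1, 1], [1, 2], [8, 8], [9, 8]]
labels = ["cat", "cat", "dog", "dog"]  # our labelling, and therefore our biases
clf = KNeighborsClassifier(n_neighbors=1).fit(points, labels)
print(clf.predict([[2, 1]]))  # -> ['cat'], derived entirely from the data we gave it

# Unsupervised: no labels at all; the model finds its own groupings, which may
# reflect real structure or spurious correlations we never intended to teach it.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)
print(clusters)  # e.g. [0 0 1 1] -- groupings it "learned" on its own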

The kicker is that this kind of AI is, at heart, an unsupervised model. And if the model isn't built to provide context, it won't provide any, which makes it difficult to verify the answer it gives, depending on the complexity of the problem you posed. The hidden biases in the data may make it nearly impossible for you to determine why an answer is wrong.

Is it a "true" model? That all depends on how you want to evaluate it. Which leads me to some questions, in no particular order of importance:

1. Is this model useful? How?
2. What are the limitations of the model? Do we acknowledge those limitations? And will we stay within them?
3. Do we really want to add the model to something that can potentially be life threatening?
4. Is the model sufficiently mature so we can depend on it in a potentially life threatening situation?
5. Does this model actually give moral answers to immoral questions it may be posed? Now you are getting outside of the parameters that the model can handle and as the model isn't moral or immoral, I don't think it can make the call. See question 2 above.

In short:

AI is not to the point of being able to discern what the user may want the information for. It is not able to judge whether the question is even legal. It just provides answers based on the information it is given, and the answers it gives may or may not make sense, either. It cobbles together things it "finds" in the data it uses that appear to be connected in some manner. So if you decide to use the information this model gives you, and that information is wrong, or the model hallucinated and gave a nonsense answer (i.e., pulled information out of its collective backside), and you went with it and your actions result in something bad, then the liability for your actions should rest on you. But if AI is being inserted into society (as it appears to be) with the promise that it can and will make our lives easier and better, and we find that it doesn't, then yes, the people developing the AI model should be held liable for putting out a product that wasn't ready to be released.

AI is not authoritative. It cannot judge the information given to it, nor can it judge the results it may spew out. I don't think it can do the things people want it to do, not fully and certainly not reliably in its current iteration. We need to keep looking at it, researching it, investigating it, giving it situations and finding out what it will do, what it can do, and where the biases are that we inadvertently built into it. I can see situations that may eventually call for AI, but that is not now. AI isn't ready.

AI is a tool, and in human hands any tool can be used with disastrous results. Understanding is a tool, too: we need to understand this model we have built, or it can, and most likely will, be used with disastrous results.


Re: "Why some tech leaders are so worried about a California AI safety bill." - RgrF - 06-16-2024

The reaction to AI, and the restrictions introduced in response, may pose a greater threat than AI itself. Fear always leads the pack.


Re: "Why some tech leaders are so worried about a California AI safety bill." Long response - Harbourmaster - 06-17-2024

Diana wrote:
First, I haven't searched for anything regarding this on the web. The following are just my thoughts. [...]

:smiley-score010:


Re: "Why some tech leaders are so worried about a California AI safety bill." - pdq - 06-17-2024

Limitations probably need to be worldwide, because stopping it in California ain’t gonna stop AI.

There really ought to be a multinational effort/monitoring agency (if there isn't already - I dunno) to follow and regulate AI worldwide, something akin to the oversight of human cloning. That's kind of a poor comparison, since cloning would be harder to do surreptitiously and, honestly, it's far less of a risk to humanity than AI. (Identical twins are clones, they've lived among us since the dawn of time, and they don't seem to be particularly nefarious.)


Re: "Why some tech leaders are so worried about a California AI safety bill." - mrbigstuff - 06-18-2024

Diana, an epic answer, old school!


Re: "Why some tech leaders are so worried about a California AI safety bill." Long response - vision63 - 06-18-2024

Diana wrote:
First, I haven't searched for anything regarding this on the web. The following are just my thoughts. [...]

Now I need a hug.


Re: "Why some tech leaders are so worried about a California AI safety bill." Long response - Diana - 06-18-2024

vision, you’re *hugged*.

:sympathy:


Re: "Why some tech leaders are so worried about a California AI safety bill." Long response - vision63 - 06-18-2024

Diana wrote:
vision, you’re *hugged*.

:sympathy:

That feels good... :emoticon_love: