06-17-2024, 12:38 AM
Diana wrote:
First, I haven't searched for anything regarding this on the web. The following are just my thoughts.
Second, I deal with models on a daily basis. They are neither inherently good nor bad, but a way that we put together information that may yield a desired result. Are they biased? Hell yes. I don't think we can make them completely unbiased, but it's worth trying.
So, here goes.
The nature of the AI assistant is more like a search engine than a car. But these are two very different technologies.
It doesn't matter how much computer stuff you throw into the car, a "car" is a mechanical item that produces a predictable response to a given stimulus. For instance, you press the accelerator and the car goes faster, or the engine revs up; you press the brake and the car slows, or stops. The exact method by which these events happen is not that relevant to this discussion: a car is a mechanical item in which there is very little or no influence from the chaotic nature of the rest of the world. (Before you bristle at the "chaotic nature" statement, read on.) The only chaotic things that happen with a car are due to the failure of a part, or to the operation of the car (the idiot behind the wheel). Parts that fail need to be replaced--this is called maintenance. Thus, if the car ultimately leads to someone getting killed, it is something that gets litigated: someone gets sued, someone pays.
A basic computer is also a mechanical item: very predictable in its response. Turn on this particular chip and something happens; turn it off and another thing happens. If it fails, the computer dies, and it usually doesn't kill anyone. If it were to explode and kill someone that way, perhaps someone would litigate and perhaps someone would have to pay. But the basic computer is just an object, a tool to be used.
Now, the nature of the computer is that it needs to be programmed (yeah, I know you know this stuff). The programming can be very discrete in its nature, or not so much. A discrete problem can be solved on the computer without a lot of fuss, even if it takes a lot of time; it takes no guesses and it takes no assumptions. However, a non-discrete problem is a completely different animal in that it DOES take guesses and it DOES take assumptions to be able to solve the problem; often it takes multiple passes through the problem from multiple different directions, and the answer the computer determines is a convergence of these attempts, where each subsequent attempt differs less and less from the one before: it finds a minimum in the differences between answers; it converges to an answer. Is this the true minimum, or just some minimum it found (a local minimum) on its way toward the true minimum? That depends on the number of times it tries to find the "answer" and whether or not it finds one that fits the minimization better. So, where does it start and how does it determine a starting point? Here's where the chaotic nature of the universe steals into it: the program starts with a value usually determined by a random number generator of some type (which isn't truly random, as some will tell you). The answer we get depends on the starting point and the assumptions we make. If the assumptions are wrong, the answer is most likely wrong as well.
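If you want to see that start-dependence for yourself, here's a toy sketch in Python (the function, the step size, and the iteration count are all invented purely for illustration): plain gradient descent on a curve with two dips, started from random points. Some runs find the true minimum near x = -1.37; others settle into the merely local one at x = 1, and which one you get depends entirely on where the random start landed.

    import numpy as np

    # Toy objective, invented for illustration: a global minimum near
    # x = -1.37 and a merely local one at x = 1.
    def f(x):
        return x**4 - 3*x**2 + x

    def grad(x):
        return 4*x**3 - 6*x + 2

    rng = np.random.default_rng()

    for trial in range(5):
        x = rng.uniform(-2, 2)   # random starting point: the chaos sneaks in here
        for _ in range(2000):    # plain gradient descent, fixed step size
            x -= 0.01 * grad(x)
        print(f"run {trial}: converged to x = {x:+.3f}, f(x) = {f(x):+.3f}")

Run it a few times and you'll see both answers come out, from the exact same program, differing only in where the random number generator dropped it.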
AI, Artificial Intelligence, is just a computer program. Hell, the definition of intelligence changes from day to day! Intelligence is NOT a discrete problem! It has to start with something, it has to use the assumptions we give it. In addition, it is a MODEL of the way real intelligence works.
There are two types of models: supervised and unsupervised. Supervised models are those where we give the model labelled data and it finds the answers for us. They are strict, and they require human supervision to work: we give it data and it uses that data as we have labelled it, never going outside of its "lane." Thus, when it gives an answer, we are assured that the answer derives from the data we have given it; if we have mislabelled something, or miscategorized something, that is due to the nature of the data WE GAVE IT. Unsupervised models, on the other hand, also use the data we have given them, but they find correlations and relationships within the data that we have not specifically pointed out, hence the "unsupervised" part. We may very well be unaware of those correlations and relationships, or they may be spurious. But we didn't knowingly give the model those correlations and relationships; it "learned" them on its own. Hence, if the data contains biases, then the model will be biased.
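To make that distinction concrete, here's a toy sketch in Python using scikit-learn (the data is made up, and I picked k-nearest-neighbours and k-means simply as everyday stand-ins for the two families):

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.cluster import KMeans

    # Made-up data: two blobs of 2-D points.
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 0.5, (20, 2)),
                   rng.normal(3, 0.5, (20, 2))])

    # Supervised: WE hand it the labels, and it stays in the lanes we drew.
    y = np.array([0] * 20 + [1] * 20)
    clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
    print(clf.predict([[0.2, 0.1], [2.9, 3.1]]))  # answers trace back to OUR labels

    # Unsupervised: no labels at all -- it invents its own grouping, and any
    # structure (or bias) hiding in the data comes along for the ride.
    km = KMeans(n_clusters=2, n_init=10).fit(X)
    print(km.labels_)                             # groupings it "learned" on its own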
The kicker is that AI is an unsupervised model. And if the model isn't built to provide context, it won't, which makes it difficult to verify the answer it gives, depending on the complexity of the problem you posed. The hidden biases in the data may make it nearly impossible for you to determine why an answer is wrong.
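Here's a toy sketch of the kind of hidden bias I mean, again in Python with scikit-learn (everything is invented: the "skill" signal, the zip-code proxy, the noise rate). A model trained on a biased proxy looks trustworthy on the historical data, and good luck figuring out from the outside why its answers go wrong:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Invented history: a real signal ("skill") drives the outcome, but a
    # spurious attribute (a zip-code flag, say) happens to track it closely.
    skill = rng.uniform(0, 1, 500)
    y = (skill > 0.5).astype(int)
    zip_flag = np.where(rng.random(500) < 0.05, 1 - y, y).astype(float)

    # Train on the proxy ALONE -- on the biased history it looks great.
    model = LogisticRegression().fit(zip_flag.reshape(-1, 1), y)
    print(model.score(zip_flag.reshape(-1, 1), y))  # ~0.95: seems trustworthy

    # But it is deciding purely on zip code, so anyone from the "wrong"
    # zip code gets the wrong answer no matter how skilled they are.
    print(model.predict([[0.0], [1.0]]))            # -> [0 1]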
Is it a "true" model? That all depends on how you want to evaluate it. This leads me to some questions, not necessarily in the order presented:
1. Is this model useful? How?
2. What are the limitations of the model? Do we acknowledge those limitations? And will we stay within them?
3. Do we really want to add the model to something that can potentially be life threatening?
4. Is the model sufficiently mature so we can depend on it in a potentially life threatening situation?
5. Does this model actually give moral answers to immoral questions it may be posed? Now you are getting outside of the parameters that the model can handle, and as the model is neither moral nor immoral, I don't think it can make the call. See question 2 above.
In short:
AI is not to the point of being able to discern what the user may want the information for. It is not able to make a judgement as to whether or not the question is legal. It just provides answers based on the information it is given. And the answers it gives may or may not make sense, either. It cobbles together things it "finds" in the databases it uses that appear to be connected in some manner. Thus, if you decide to use the information this model gives you and that information is wrong, or the model hallucinated and gave a nonsense answer (i.e., pulled information out of its collective backside) and you went with it anyway, and your actions result in something bad, then the liability should rest on you for your actions. If AI is being inserted into society (as it appears to be) with the idea that it can and will make our lives easier and better, and we find that it doesn't, then yes, the people developing the AI model should be held liable for putting out a product that wasn't ready to be released.
AI is not authoritative. It cannot judge the information given to it, nor can it judge the results it may spew out. I don't think it can do the things that people want it to do, not fully and certainly not reliably in its current iteration. We need to continue looking at it, researching it, investigating it, giving it situations and finding out what it will do, what it can do, and where the biases we inadvertently built into it lie. I can see situations that may eventually require AI, but that is not now. AI isn't ready.
AI is a tool, and in the hands of a human, any tool can be used to disastrous ends. Understanding AI is also a tool, and as such we need to understand this model we have built, or it can, and most likely will, be used for disastrous results.
