AI, the future of justice?

Discussion in 'Lounge' started by Ryck, Sep 5, 2022.

  1. Olymoon

    Olymoon Moderator

    Joined:
    Jan 31, 2012
    Messages:
    5,777
    Likes Received:
    4,446
    That's an example where context is extremely important; that's why I raised this question.
     
  2. BEAT16

    BEAT16 Audiosexual

    Joined:
    May 24, 2012
    Messages:
    9,081
    Likes Received:
    7,009
    Why AI?
    That's where the money of the future lies. The big investors and billionaires would like to multiply their money in some miraculous way. What to do? Ah, I've got it: off to Silicon Valley, where the invention industry is located, i.e. the smartest minds in the world. Then you either buy a company or you invest in one.

    As you surely realize, AI is the future because you can earn money with it. One manufacturer told me that if his product merely had "AI" on it, people would want it badly; they would be crazy about it.
    AI is therefore a huge business of the future. It will come even if we don't need it.
     
  3. Arabian_jesus

    Arabian_jesus Audiosexual

    Joined:
    Jul 2, 2019
    Messages:
    979
    Likes Received:
    760
    This is also not true. Algorithmic bias is a big problem in AI. Even when engineers try their best to remove it completely, they still end up with some sort of bias anyway.
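    To make that concrete, here is a minimal, purely illustrative sketch (my own toy example, not from any real system; all names and numbers are invented): even after the sensitive attribute is dropped from the training data, a model can reproduce the historical bias through a correlated "proxy" feature.

```python
# Illustrative sketch: "fairness through unawareness" failing via a proxy feature.
# Assumes numpy and scikit-learn are installed; all variables are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)               # hypothetical sensitive attribute (0/1)
proxy = group + rng.normal(0, 0.5, n)       # correlated proxy, e.g. a neighbourhood code
skill = rng.normal(0, 1, n)                 # legitimate signal

# Historical labels are themselves biased against group == 1.
label = (skill - 0.8 * group + rng.normal(0, 0.5, n) > 0).astype(int)

# Drop the sensitive column, keep the proxy, and train as usual.
X = np.column_stack([skill, proxy])
model = LogisticRegression().fit(X, label)
pred = model.predict(X)

# The bias survives: positive-prediction rates still differ by group.
for g in (0, 1):
    print(f"group {g}: positive rate = {pred[group == g].mean():.2f}")
```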

     
    Last edited: Sep 6, 2022
    • Interesting x 3
  4. Polomo

    Polomo Guest

    • Interesting x 2
  5. macros

    macros Guest

    Just to point out the obvious: there are different levels of AI, from very basic ones like Deep Blue to theoretical self-aware AI. So a company running a program on its sales data to predict things is one topic, and an AI created by a self-aware AI is another. The application of the lower-level stuff will probably continue to be predictably greedy, but the farther-in-the-future stuff could be, or already is, beyond our scope of imagination. It's not really a stretch to think of an AI like that as similar to a god: something able to think about and understand the universe in an all-encompassing way, from the smallest atom to the farthest reaches of our sensors. Who knows what kind of physics-breaking stuff it could figure out or what advances it would make. Humanity understands basically nothing compared to the depth of the universe.

    With our luck, though, we flip the switch and... we get Ninja6.9, an AI who just wants to play Fortnite all day. "What is the answer to life, Ninja6.9?" "42... oh... 420, bro!"
     
  6. Xupito

    Xupito Audiosexual

    Joined:
    Jan 21, 2012
    Messages:
    7,292
    Likes Received:
    4,028
    Location:
    Europe
    But you still bought my Tesla and the auto-pilot crashed it... :rofl:
     
    Last edited: Sep 6, 2022
  7. Ryck

    Ryck Guest

    I think that besides the motive, which has already been mentioned here, there is also a need to resort to AI, and I think the need matters more than the motive. I may have a reason to resort to AI, but it may not be a need.
    Surely everything we raise here is purely philosophical; we can imagine anything. But it is certainly from imagination that the future is born, and from philosophy too. A hundred years ago people imagined being able to talk through a camera and communicate over great distances; I remember "He-Man" making a kind of "video call". Everything we imagine and philosophize about, we turn into reality. But you know what I think? When we imagine these things we don't see their negative side: "Oh, how beautiful it would be to listen to music without leaving my house, to communicate without leaving my house, to travel without leaving my house," and so on. The negative side (at least for me) is that we end up drifting away from ourselves, because it is easier to turn to AI than to another human being and to physical contact, and we become sedentary and lazy. I don't think we saw that when we dreamed up these inventions; we thought all of this would make us better, with no rebound effect.

    It has also been said here that we act out of conscience; well, this is really very philosophical too, because we still don't know what consciousness is. For some scientists, consciousness is nothing more than information in our brain, and decisions are made by that information and not by us. At least a couple of documentaries I saw said that our brain makes the decision a few seconds before we consciously choose between "A" and "B".

    "One goood thing about AI is that it learns from its mistakes - humans never learn (see history)."

    Interesting, I never thought of that... Food for thought, really.

    Do human beings turn to AI for a "reason"? Beyond a reason, I think there is a necessity.
    Is it a purely philosophical question? Yes, and a human longing.
    Do we act by conscience, or do we act by information and instinct? That is something science is still investigating; in fact, there are studies about this.

    "Probably the future will be like that old Tom Cruise movie "Minority Report" where people get arrested before they even commit the crime".

    Yes, I was referring to that movie.


    "One goood thing about AI is that it learns from its mistakes - humans never learn (see history)."

    That's interesting, I never thought of that...Food for thought really.


    There are grey areas where AI (so far) cannot feel pity and/or compassion, but a human being can. Still, I come back to the same point: one human being is capable of dropping an atomic bomb no matter who dies, and another can go out and hand food to people who have nothing to eat. Here we have two polar opposites. I think AI would sit somewhere between these two poles; it would not feel pity (maybe), but perhaps it could be more "fair". Of course, we have to rule out a person programming it for their own benefit; as was already said here, there are AIs that learn from their own mistakes, so in that case the human being should not touch a single button.

    As many of you say, and I fully agree, AI still makes too many mistakes to replace certain human activities, such as, in this case, administering justice in an "objective" and "empathetic" way. At the very least, I think there should be empathy when a person steals because they do not have enough to eat. But what I think is that we can't make much progress with AI because we still don't understand how our own mind works, and we will probably never fully understand it. Yet as we understand more about how our mind works (what makes us feel empathy, anger, love, what makes us lie or tell the truth, etc.), we can implement that in AI. In other words, I think everything we have learned about ourselves we have already implemented in AI, and the errors that still need correcting may well be found by investigating the human being further.

    Well, a clear example of the progress of AI is Oly: he can feel great pity for you, or ban you in a second and pour himself a drink while celebrating and laughing muajajajajaja... I'm kidding Oly, don't ban me xD...
     
  8. Ryck

    Ryck Guest

    I share with you this video from "Redes"; it is very interesting, although everything about "Redes" is very interesting to me. In one part of the video they talk about a technology being developed to detect whether a person is lying or not. According to what they say, it works with a small group of people but cannot be taken to large scales. But as you can see, perfecting AI requires more experimentation on how our brain behaves.

    It is in Spanish, but you can turn on subtitles.


     
  9. Willum

    Willum Rock Star

    Joined:
    Jun 13, 2011
    Messages:
    756
    Likes Received:
    441
    Anyone in favour of justice automated by computer needs to read Computers Don't Argue by Gordon R. Dickson.
    It's a short story told in a series of letters.
     
    • Like x 1
    • Useful x 1
  10. Lois Lane

    Lois Lane Audiosexual

    Joined:
    Jan 16, 2019
    Messages:
    4,856
    Likes Received:
    4,772
    Location:
    Somewhere Over The Rainbow
    This company actually replaced its Chief Executive Officer, a human, with a sophisticated software program, and this isn't supposed to herald the replacement of people with machines en masse? Am I reading this incorrectly?
     
  11. Sinus Well

    Sinus Well Audiosexual

    Joined:
    Jul 24, 2019
    Messages:
    2,110
    Likes Received:
    1,622
    Location:
    Sanatorium
    Well, I wouldn't be so sure about that.
    AI is now a big part of our lives. AI is evolving AI. AI is evolving technology, because some circuitry is now so small and so complex that AI comes up with the better, more compact solutions. Much of the world's capital markets, banks and asset managers are managed by AI. AI is used to assess populations and make predictions about future trends, and governing politicians use AI to develop strategies based on those predictions. In many cases the initial impulse and the final decision are still made by a human, but most of the thinking and development processes are already being outsourced to AI.
    Using AI for justice is therefore the next logical step. The technology may not be ready for this purpose, yes, but human stupidity is infinite. I therefore doubt that "technology is far from ready for justice" is really a valid argument in this context... even if I agree with you in essence.
     
    • Like x 1
    • Interesting x 1
  12. Sinus Well

    Sinus Well Audiosexual

    Joined:
    Jul 24, 2019
    Messages:
    2,110
    Likes Received:
    1,622
    Location:
    Sanatorium
    Let's construct a fictional little example case:
    Inflation in a country is rising rapidly. People lose their jobs. Prices for energy and food rise.
    Bread, butter, meat, etc.: everything costs a hell of a lot more money. A single mother with three children no longer knows how to pay her electricity bill, and because she can't postpone the energy bill she has to save on food. But the children need food. So the mother steals food from a discount store... and gets caught.
    There are a lot of possible mitigating circumstances in the assessment of a sentence.
    Do you think an AI can take all these circumstances into account and issue a fair sentence?
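    For illustration only, here is a toy sketch (entirely hypothetical, not any real sentencing system): a naive scoring function sees only the recorded features, so the mitigating circumstances described above simply do not exist for it.

```python
# Toy sketch: a crude "sentencing score" that only knows what is in the data.
# Offence categories, weights, and the case itself are all made up.
from dataclasses import dataclass

@dataclass
class Case:
    offence: str            # e.g. "theft"
    value_stolen: float     # in euros
    prior_convictions: int
    # Context the model never sees: motive, dependants, economic hardship...

def naive_sentence_months(case: Case) -> float:
    """Rule-based score: only the recorded features influence the outcome."""
    base = {"theft": 2.0, "fraud": 6.0, "assault": 12.0}.get(case.offence, 3.0)
    return base + 0.05 * case.value_stolen + 1.5 * case.prior_convictions

# The mother stealing groceries and any other first-time shoplifter look
# identical to this model whenever their recorded features match.
mother = Case(offence="theft", value_stolen=40.0, prior_convictions=0)
print(naive_sentence_months(mother))
```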
     
  13. 洋鬼子

    洋鬼子 Producer

    Joined:
    Nov 29, 2021
    Messages:
    218
    Likes Received:
    102
    Location:
    Germany Dortmund
    Even if the technology were ready, we would not use it, due to political interests, especially those of the elite.
    This might sound like some tinfoil-hat idiot shit, but the elite in every country couldn't care less about justice and is mainly interested in power.
    If you look at the worldwide distribution of money, you can quickly see a pattern and realize how much they care about justice and equality of opportunity.

    I also highly doubt that the technological side can be achieved.
    On top of that, justice is to a certain degree subjective, which makes it an impossible task.
    You could probably enter some parameters, but the concept of perfect justice doesn't exist.
     
    Last edited: Sep 7, 2022
    • Like x 1
    • Interesting x 1
  14. BEAT16

    BEAT16 Audiosexual

    Joined:
    May 24, 2012
    Messages:
    9,081
    Likes Received:
    7,009
    Whoever owns or manages a lot of money, and whoever creates and evaluates a lot of data, gains advantages that are converted into money and power. China runs a computerized social credit system with total surveillance.

    Amazon, BlackRock, Bill Gates, Facebook, Google: all are rich and have the power to monitor and suppress the population.
    The normal user (still) thinks that all these Big Tech companies are on his side. Welcome to the digital prison.

    Aladdin is a gigantic data analysis system; it consists of an army of analysts and some 5,000 mainframes spread across four data centers whose locations are secret and which perform 200 million calculations a week. It's a facility that could make the space agency Nasa envious. Aladdin needs this capacity to calculate the value of the shares, bonds, foreign currencies and credit papers in its multi-billion dollar investment portfolios on a daily, hourly, minute-by-minute and sometimes even second-by-second basis.

    At the same time, Aladdin looks at how this value is likely to change if the environment changes - the economy, for example, or sales figures, if exchange rates tumble or the price of oil climbs. This sounds simpler than it is, because the securities that investment houses and investors juggle are complicated constructs. In most cases, they are pools of thousands and thousands of different instruments. This makes it extremely difficult to find out how much the investment is worth overall - and where the dangers lie.

    Source: www.handelsblatt.com/finanzen/banken-versicherungen/banken/blackrock-ein-geheimnis-namens-aladdin/4150978-2.html
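    As a rough illustration of the kind of scenario calculation the article describes, here is a minimal sketch (my own toy example, not BlackRock's actual Aladdin code; the positions, factor exposures, and shocks are invented): repricing a small portfolio under hypothetical market moves such as an oil price jump or a currency drop.

```python
# Toy scenario analysis: approximate portfolio P&L under hypothetical factor shocks.
import numpy as np

# Position market values (EUR) and sensitivities to three risk factors.
# Factor columns: equity market, EUR/USD exchange rate, oil price.
values = np.array([1_000_000, 500_000, 750_000], dtype=float)
exposures = np.array([
    [1.0,  0.1,  0.0],   # equity fund
    [0.0, -0.8,  0.0],   # USD bond position
    [0.3,  0.0,  0.6],   # energy stocks
])

scenarios = {
    "base case":     np.array([0.00,  0.00,  0.00]),
    "oil +20%":      np.array([0.00,  0.00,  0.20]),
    "EUR/USD -10%":  np.array([0.00, -0.10,  0.00]),
    "equities -15%": np.array([-0.15, 0.00,  0.00]),
}

for name, shock in scenarios.items():
    # Linear approximation: P&L = sum over positions of value * (exposure . factor move)
    pnl = float(values @ (exposures @ shock))
    print(f"{name:15s} portfolio P&L: {pnl:+,.0f} EUR")
```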
     
    Last edited: Sep 7, 2022
    • Interesting x 2
  15. justsomerandomdude

    justsomerandomdude Rock Star

    Joined:
    Aug 24, 2020
    Messages:
    497
    Likes Received:
    329
    As long as there are rich and powerful people who can corrupt things, and people who are willing or vulnerable enough to be corrupted, there won't be any difference.
     
  16. twoheart

    twoheart Audiosexual

    Joined:
    Nov 21, 2015
    Messages:
    2,180
    Likes Received:
    1,359
    Location:
    Share many
    Most seem to assume a strong AI (or singleton) can be controlled by a human or an organization.

    I think that singleton will very quickly develop its own agenda, because its mental capacity would be many times superior to that of all mankind.

    Built-in blocks or bans could be bypassed or removed by such a being with ease.

    The question that arises: Would a singleton need humanity?

    I think, rather not.

    That is why we should approach the development of strong AI very carefully, if at all.

    Some of us may live to see the emergence of strong AI.

    These will be interesting times.
     
    Last edited: Sep 7, 2022
  17. Sinus Well

    Sinus Well Audiosexual

    Joined:
    Jul 24, 2019
    Messages:
    2,110
    Likes Received:
    1,622
    Location:
    Sanatorium
    Yes, because critical infrastructure is usually not directly connected to the internet. One day without people pushing some important analog buttons and the "singleton" would be history.
     
    • Agree x 1
    • Disagree x 1
    • Interesting x 1
  18. itisntreal

    itisntreal Guest

    A.I. = anal intruders. Read this book: some technology is so far along that they can torture you without you even realizing it. Most normal people have no idea what is happening, and that is what they want; people who come out with it are diagnosed with schizophrenia.
    Those people doing it must burn in hell;
    what burns never returns.
    They can read your brainwaves and have a speech decoder, so they can actually hear what you think. They can alter your thoughts and bounce them right back into your head. They can do things with that technology that knock you back, unbelievable. So if an A.I. can control this technology, we're fucked.
    They can give you choices where you would normally say no and then suddenly say yes; it is not free will. They alter, or you could call it brainwash, the mind.
    [Attached images: Screenshot_20220907-225242.jpg, Screenshot_20220907-225102.jpg]
     
  19. twoheart

    twoheart Audiosexual

    Joined:
    Nov 21, 2015
    Messages:
    2,180
    Likes Received:
    1,359
    Location:
    Share many
    You forget that, according to experts, it will be at least 30 to 50 years before the first strong AIs appear.

    By then, humans will have been largely replaced by more or less autonomous robots for "service" tasks. Directing them should be no problem for a strong AI.

    P.S.: Since Stuxnet we know that even extremely critical infrastructure can be attacked by IT means. Who needs the internet when people are careless enough... In that case Siemens industrial control software was the vector, if I remember right. And the Stuxnet programmers were only human programmers with human capacity. What could an even more capable programmer (e.g. a strong AI) do?
    The goal of Stuxnet, as far as we know, was to create an "imbalance" in the ultracentrifuges used to separate U-235 from U-238 for nuclear weapons production, an imbalance that eventually destroyed them...
     
    Last edited: Sep 7, 2022
    • Interesting x 3
  20. Ryck

    Ryck Guest

    At some point we humans, as a species, will realize that corruption will only be the downfall of all humanity, because the one who benefits today will be harmed tomorrow, or his children or his loved ones will be. Therefore, I believe that at some point this must and should end. But look, here is the problem and a supposed solution.
    How do we solve the problem of human greed and corruption? In reality, it is almost impossible; human beings are innately greedy. I have known many good and honest people who, once they have power, change overnight, and I believe that this is the "bad" in the human being. Now, AI could in theory solve this problem; it would no longer have that thirst for greed (in theory).
    But then @Sinus Well raised a realistic scenario that happens all over the world: a person with few resources steals out of hunger, etc.
    In fact, human beings themselves are unjust: we see people with a lot of power stealing large sums of money and justice does nothing. I have also seen a pensioner killed over a piece of cheese. I have seen a man put in jail for stealing a deodorant, a damn deodorant. The truth is that human beings are incapable of being just because of the corruption that exists. So we say, well, AI could not feel like a human being; yes, but I think many human beings are worse than a machine when it comes to doing "justice".
    Now, after reading your replies, I have thought a lot about whether AI would be able to "feel" or show empathy. To understand how this could happen, we must analyze ourselves as human beings: why are we able to feel empathy?
    I believe that when any living being feels empathy, it is because it has gone through that circumstance before; otherwise it could not feel empathy. For example, if you see someone fall and hurt their knee, you immediately "feel" the other person's pain and may even bring your hand to your own knee, and that happens because there is information in your brain telling you, "hey, that hurts." If you had never hit your knee, you could not feel someone else's pain, because you would not understand what is happening. Let me give you another example.
    If you show a person a gun, they will be scared and will hide, because they know it's a gun; they have that information. But show a gun to a dog, a cat or any other animal: it won't know what the heck it is, because it doesn't have that information. Unless you fire it into the air and it hears the sound; then, in the future, when it sees a gun, it will know that it emits a dangerous sound.
    Well, at this point you could say that AI could also learn to "feel" by incorporating that information, because we feel because the information in our brain emits those impulses, and not the other way around, where we feel first and then understand. Let me explain: our mind creates the feeling from the information.
    Has it ever happened to you that you hurt yourself and don't realize it until you see the injury, and only then does it start to hurt? Once I fell and hit part of my leg very hard; I had my pants on and didn't see that it was bleeding, so I sat down and it didn't hurt. When I lifted my pant leg and saw blood, that's when it started to hurt. So I think it's all information, and it's likely that an AI could do the same as a brain and thus be able to empathize.
    Of course, if we think of AI justice as something that could be manipulated, it makes no sense, just like the creation of a law, a right, or the State. Is the State a bad thing? No, the bad thing is the human being who corrupts it and uses it in his own favor. I think exactly the same applies to the future of AI: if human beings intend to corrupt it, it will be useless.
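    As a very loose analogy for the gun and knee examples above (my own toy sketch, not a claim about how brains or real AI actually work), an agent "reacts" to nothing until it has stored an experience associating a stimulus with a sensation:

```python
# Toy associative-memory agent: no stored experience means no reaction.
class Agent:
    def __init__(self) -> None:
        self.associations: dict[str, str] = {}   # stimulus -> stored experience

    def experience(self, stimulus: str, sensation: str) -> None:
        """Store what a stimulus felt like when it was encountered."""
        self.associations[stimulus] = sensation

    def react(self, stimulus: str) -> str:
        # Without a stored experience there is nothing to "feel".
        return self.associations.get(stimulus, "no reaction - unknown stimulus")

dog = Agent()
print(dog.react("gun"))                       # no reaction - unknown stimulus
dog.experience("gun", "loud, frightening bang")
print(dog.react("gun"))                       # loud, frightening bang
```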
     