ALERT How We Can Prepare for Catastrophically Dangerous AI—and Why We Can't Wait
  1. #1
    Join Date: Nov 2003 · Location: Between Holy & Crap · Posts: 120,407

    How We Can Prepare for Catastrophically Dangerous AI—and Why We Can't Wait

    Matthew 24:22

    King James Version
    And except those days should be shortened, there should no flesh be saved: but for the elect's sake those days shall be shortened.

    -------------------------



    How We Can Prepare for Catastrophically Dangerous AI—and Why We Can't Wait


    Artificial intelligence in its current form is mostly harmless, but that’s not going to last. Machines are getting smarter and more capable by the minute, leading to concerns that AI will eventually match, and then exceed, human levels of intelligence—a prospect known as artificial superintelligence (ASI).

    As a technological prospect, ASI will be unlike anything we’ve ever encountered before. We have no prior experience to guide us, which means we’re going to have to put our collective heads together and start preparing. It’s not hyperbole to say humanity’s existence is at stake—as hard as that is to hear and admit—and given there’s no consensus on when we can expect to meet our new digital overlords, it would be incumbent upon us to start preparing now for this possibility, and not wait until some arbitrary date in the future.

    On the horizon

    Indeed, the return of an AI winter, a period in which innovations in AI slow down appreciably, no longer seems likely. Fueled primarily by the powers of machine learning, we’ve entered into a golden era of AI research, and with no apparent end in sight. In recent years we’ve witnessed startling progress in areas such as independent learning, foresight, autonomous navigation, computer vision, and video gameplay. Computers can now trade stocks on the order of milliseconds, automated cars are increasingly appearing on our streets, and artificially intelligent assistants have encroached into our homes. The coming years will bear witness to further advancements, including AI that can learn through its own experiences, adapt to novel situations, and comprehend abstractions and analogies.

    We don’t know when ASI will arise or what form it’ll take, but the signs of its impending arrival are starting to appear.

    Last year, for example, in a bot-on-bot Go tournament, DeepMind’s AlphaGo Zero (AGZ) defeated the regular AlphaGo by a score of 100 games to zero. Incredibly, AGZ required just three days to train itself from scratch, during which time it acquired literally thousands of years of human Go playing experience. As the DeepMind researchers noted, it’s now “possible to train [machines] to superhuman level, without human examples or guidance, given no knowledge of the domain beyond basic rules.” It was a stark reminder that developments in this field are susceptible to rapid, unpredictable, and dramatic improvements, and that we’ve entered into a new era—the age of superintelligent machines.
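AlphaGo Zero's actual training loop is far more sophisticated (deep networks plus Monte Carlo tree search), but the core idea the researchers describe, a system improving by playing against itself with no human examples, can be sketched with tabular Q-learning on the toy game of Nim (take 1 to 3 stones; whoever takes the last stone wins). All parameter values here are illustrative:

```python
import random

def train_selfplay(pile=10, max_take=3, episodes=20000, alpha=0.5, eps=0.2, seed=0):
    """Tabular Q-learning in which one agent plays both sides of Nim.
    Q[(stones, take)] estimates the move's value for the player to act."""
    rng = random.Random(seed)
    Q = {}
    def q(s, a):
        return Q.get((s, a), 0.0)
    for _ in range(episodes):
        s = pile
        while s > 0:
            actions = list(range(1, min(max_take, s) + 1))
            # epsilon-greedy: mostly exploit current knowledge, sometimes explore
            a = rng.choice(actions) if rng.random() < eps else max(actions, key=lambda x: q(s, x))
            s2 = s - a
            if s2 == 0:
                target = 1.0  # taking the last stone wins outright
            else:
                # the position is handed to the opponent, so its value is negated
                target = -max(q(s2, b) for b in range(1, min(max_take, s2) + 1))
            Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))
            s = s2
    return Q

def best_move(Q, s, max_take=3):
    return max(range(1, min(max_take, s) + 1), key=lambda a: Q.get((s, a), 0.0))
```

After training, the learned policy rediscovers the classic strategy of always leaving the opponent a multiple of four stones, with no knowledge beyond the basic rules, which is the point the DeepMind quote is making.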

    “According to several estimates, supercomputers can now—or in the near future—do more elementary operations per second than human brains, so we might already have the necessary hardware to compete with brains,” said Jaan Tallinn, a computer programmer, founding member of Skype, and co-founder of the Centre for the Study of Existential Risk, a research center at the University of Cambridge concerned with human extinction scenarios. “Also, a lot of research effort is now going into ‘meta learning’—that is, developing AIs that would be able to design AIs. Therefore, progress in AI might at some point simply decouple from human speeds and timelines.”

    These and other developments will likely lead to the introduction of AGI, otherwise known as artificial general intelligence. Unlike artificial narrow intelligence (ANI), which is super good at solving specialized tasks, like playing Go or recognizing human faces, AGI will exhibit proficiency across multiple domains. This powerful form of AI will be more humanlike in its abilities, adapting to new situations, learning a wide variety of skills, and performing an extensive variety of tasks. Once we achieve AGI, the step to superintelligence will be a short one—especially if AGIs are told to create increasingly better versions of themselves.

    “It is difficult to predict technological advancement, but a few factors indicate that AGI and ASI might be possible within the next several decades,” said Yolanda Lannquist, AI policy researcher at The Future Society, a non-profit think tank concerned with the impacts of emerging technologies. She points to companies currently working toward AGI development, including Google DeepMind, GoodAI, Araya Inc., Vicarious, and SingularityNET, as well as smaller teams and individuals at universities. At the same time, Lannquist said research on ANI may lead to breakthroughs toward AGI, with Facebook, Google, IBM, Microsoft, Amazon, Apple, OpenAI, Tencent, Baidu, and Xiaomi among the major companies currently investing heavily in AI research.

    With AI increasingly entering into our lives, we’re starting to see unique problems emerge, particularly in areas of privacy, security, and fairness. AI ethics boards are starting to become commonplace, for example, along with standards projects to ensure safe and ethical machine intelligence. Looking ahead, we’re going to have to deal with such developments as massive technological unemployment, the rise of autonomous killing machines (including weaponized drones), AI-enabled hacking, and other threats.

    Beyond human levels of comprehension and control

    But these are problems of the present and the near future, and most of us can agree that measures should be taken to mitigate these negative aspects of AI. More controversial, however, is the suggestion that we begin preparing for the advent of artificial superintelligence—that heralded moment when machines surpass human levels of intelligence by several orders of magnitude. What makes ASI particularly dangerous is that it will operate beyond human levels of control and comprehension. Owing to its tremendous reach, speed, and computational proficiency, ASI will be capable of accomplishing virtually anything it’s programmed or decides to do.

    There are many extreme scenarios that come to mind. Armed with superhuman powers, an ASI could destroy our civilization by accident, through misaligned goals, or by deliberate design. For instance, it could turn our planet into goo after a simple misunderstanding of its goals (the allegorical paperclip scenario is a good example), remove humanity as a troublesome nuisance, or wipe out our civilization and infrastructure while striving to make itself ever more capable, an escalating process AI theorists call recursive self-improvement. Should humanity embark upon an AI arms race with rival nations, a weaponized ASI could get out of control, whether during peacetime or war. An ASI could intentionally end humanity by destroying our planet’s atmosphere or biosphere with self-replicating nanotechnology. Or it could launch all our nuclear weapons, spark a Terminator-style robopocalypse, or unleash powers of physics we don’t even know about. Using genetics, cybernetics, nanotechnology, or other means at its disposal, an ASI could reengineer us into blathering, mindless automatons, thinking it was doing us a favor by pacifying our violent natures. Rival ASIs could wage war against one another in a battle for resources, scorching the planet in the process.

    Clearly, we have no shortage of ideas, but conjuring ASI-inspired doomsday scenarios is actually quite beside the point; we already have it within our means to destroy ourselves, and it won’t be difficult for ASI to come up with its own ways to end us.

    The prospect admittedly sounds like science fiction, but a growing number of prominent thinkers and concerned citizens are starting to take this possibility very seriously.

    Tallinn said once ASI emerges, it’ll confront us with entirely new types of problems.

    “By definition, ASI will understand the world and humans better than we understand it, and it’s not obvious at all how we could control something like that,” said Tallinn. “If you think about it, what happens to chimpanzees is no longer up to them, because we humans control their environment by being more intelligent. We should work on AI alignment [AI that’s friendly to humans and human interests] to avoid a similar fate.”

    Katja Grace, editor of the AI Impacts blog and a researcher at the Machine Intelligence Research Institute (MIRI), said ASI will be the first technology to potentially surpass humans on every dimension. “So far, humans have had a monopoly on decision-making, and therefore had control over everything,” she told Gizmodo. “With artificial intelligence, this may end.”

    Stuart Russell, a professor of computer science and an expert in artificial intelligence at the University of California, Berkeley, said we should be concerned about the potential for ASI for one simple reason: intelligence is power.

    “Evolution and history do not provide good examples of a less powerful class of entities retaining power indefinitely over a more powerful class of entities,” Russell told Gizmodo. “We do not yet know how to control ASI, because, traditionally, control over other entities seems to require the ability to out-think and out-anticipate—and, by definition, we cannot out-think and out-anticipate ASI. We have to take an approach to designing AI systems that somehow sidesteps this basic problem.”

    Be prepared

    Okay, so we have our work cut out for ourselves. But we shouldn’t despair, or worse, do nothing—there are things we can do in the here-and-now.

    Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies and a philosopher at Oxford’s Future of Humanity Institute, said no specific protocols or sweeping changes to society are required today, but he agreed we can do consequential research in this area.

    “A few years ago the argument that we should do nothing may have made sense, as we had no real idea on how we should research this and make meaningful progress,” Bostrom told Gizmodo. “But concepts and ideas are now in place, and we can break it down into chunks worthy of research.” He said this could take the form of research papers, think tanks, seminars, and so on. “People are now doing important research work in this area,” said Bostrom, “It’s something we can hammer away at and make incremental progress.”

    Russell agreed, saying we should think about ways of developing and designing AI systems that are provably safe and beneficial, regardless of how intelligent the components become. He believes this is possible, provided the components are defined, trained, and connected in the right way.

    Indeed, this is something we should be doing already. Prior to the advent of AGI and ASI, we’ll have to contend with the threats posed by more basic, narrow AI—the kind that’s already starting to appear in our infrastructure. By solving the problems posed by current AI, we can learn valuable lessons and set important precedents that pave the way to building safe but even more powerful AI in the future.

    Take the prospect of autonomous vehicles, for example. Once deployed en masse, self-driving cars will be monitored, and to a certain extent controlled, by a central intelligence. This overseeing “brain” will send software updates to its fleet, advise cars on traffic conditions, and serve as the overarching network’s communication hub. But imagine if someone were to hack into this system and send malicious instructions to the fleet. It would be a disaster on an epic scale. Such is the threat of AI.
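The fleet-hack scenario above is one reason real over-the-air update systems authenticate every payload before installing it. Here is a toy sketch of that idea using an HMAC tag from Python's standard library; the key and payload are invented for illustration, and a real fleet would use per-vehicle keys and public-key signatures rather than one shared secret:

```python
import hmac
import hashlib

# Hypothetical key for illustration only; real deployments provision
# per-device keys and use asymmetric signatures.
FLEET_KEY = b"example-fleet-secret"

def sign_update(payload: bytes, key: bytes = FLEET_KEY) -> bytes:
    """Server side: prepend an HMAC-SHA256 tag to a firmware payload."""
    return hmac.new(key, payload, hashlib.sha256).digest() + payload

def verify_update(blob: bytes, key: bytes = FLEET_KEY) -> bytes:
    """Vehicle side: install the payload only if the tag is authentic."""
    tag, payload = blob[:32], blob[32:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise ValueError("rejected: update failed authentication")
    return payload
```

With this check in place, a malicious instruction injected into the update channel is rejected by every vehicle because the attacker cannot forge a valid tag without the key.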

    “Cybersecurity of AI systems is a major hole,” Lannquist told Gizmodo. “Autonomous systems, such as autonomous vehicles, personal assistant robots, AI toys, drones, and even weaponized systems are subject to cyber attacks and hacking, to spy, steal, delete or alter information or data, halt or disrupt service, and even hijacking,” she said. “Meanwhile, there is a talent shortage in cybersecurity among governments and companies.”

    Another major risk, according to Lannquist, is bias in the data sets used to train machine learning algorithms. The resulting models aren’t fit for everyone, she said, leading to problems of inclusion, equality, fairness, and even the potential for physical harm. An autonomous vehicle or surgical robot may not be sufficiently trained on enough images to discern humans of different skin color or sizes, for example. Scale this up to the level of ASI, and the problem becomes exponentially worse.

    Commercial face recognition software, for example, has repeatedly been shown to be less accurate on people with darker skin. Meanwhile, a predictive policing algorithm called PredPol was shown to unfairly target certain neighborhoods. And in a truly disturbing case, the COMPAS algorithm, which predicts the likelihood of recidivism to guide sentencing, was found to be racially biased. This is happening today—imagine the havoc and harm an ASI could inflict with greater power, scope, and social reach.
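Disparities like the ones above are typically caught by disaggregating a model's error rates by group rather than reporting one overall score. A minimal sketch of such an audit (the records here are made-up toy data):

```python
from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, predicted, actual) triples.
    Returns each group's accuracy, so disparities are visible at a glance."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        hits[group] += (predicted == actual)
    return {g: hits[g] / totals[g] for g in totals}
```

A single aggregate accuracy can look acceptable while one group's accuracy is far worse; computing the metric per group is the first step of the fairness audits that exposed the face-recognition and recidivism results described above.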

    It will also be important for humans to stay within the comprehension loop, meaning we need to maintain an understanding of an AI’s decision making rationale. This is already proving to be difficult as AI keeps encroaching into superhuman realms. This is what’s known as the “black box” problem, and it happens when developers are at a loss to explain the behavior of their creations. Making something safe when we don’t have full understanding of how it works is a precarious proposition at best. Accordingly, efforts will be required to create AIs that are capable of explaining themselves in ways we puny humans can comprehend.

    Thankfully, that’s already happening. Last year, for example, DARPA gave $6.5 million to computer scientists at Oregon State University to address this issue. The four-year grant will support the development of new methodologies designed to help researchers make better sense of the digital gobbledygook inside black boxes, most notably by getting AIs to explain to humans why they reached certain decisions, or what their conclusions actually mean.
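One simple, model-agnostic way to interrogate a black box, in the spirit of the explainability work described above, is permutation importance: shuffle one input feature and see how much the model's performance drops. This sketch is a generic illustration, not the Oregon State methodology:

```python
import random

def accuracy(preds, y):
    return sum(p == t for p, t in zip(preds, y)) / len(y)

def permutation_importance(predict, X, y, feature_idx, metric, seed=0):
    """How much does the metric drop when one feature's column is shuffled?
    Needs only the black box's predict function, not its internals."""
    rng = random.Random(seed)
    baseline = metric(predict(X), y)
    col = [row[feature_idx] for row in X]
    rng.shuffle(col)
    X_shuffled = [row[:feature_idx] + [v] + row[feature_idx + 1:]
                  for row, v in zip(X, col)]
    return baseline - metric(predict(X_shuffled), y)
```

A feature whose shuffling barely moves the metric is one the model ignores; a large drop flags an input the model's decisions lean on, which is a first, crude answer to "why did it decide that?"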


    We also need to change the corporate culture around the development of AI, particularly in Silicon Valley where the prevailing attitude is to “fail hard and fast or die slow.” This sort of mentality won’t work for strong AI, which will require extreme caution, consideration, and foresight. Cutting corners and releasing poorly thought out systems could end in disaster.

    “Through more collaboration, such as AGI researchers pooling their resources to form a consortia, industry-led guidelines and standards, or technical standards and norms, we can hopefully re-engineer this ‘race to the bottom’ in safety standards, and instead have a ‘race to the top,’” said Lannquist. “In a ‘race to the top,’ companies take time to uphold ethical and safety standards. Meanwhile, the competition is beneficial because it can speed up progress towards beneficial innovation, like AI for UN Sustainable Development Goals.”

    At the same time, corporations should consider information sharing, particularly if a research lab has stumbled upon a particularly nasty vulnerability, like an algorithm that can sneak past encryption schemes, spread to domains outside an intended realm, or be easily weaponized.

    Changing corporate culture won’t be easy, but it needs to start at the top. To facilitate this, companies should create a new executive position, a Chief Safety Officer (CSO), or something similar, to oversee the development of what could be catastrophically dangerous AI, among other dangerous emerging technologies.

    Governments and other public institutions have a role to play as well. Russell said we need well-informed groups, committees, agencies, and other institutions within governments that have access to top-level AI researchers. We also need to develop standards for safe AI system design, he added.

    “Governments can incentivize research for AI safety, through grants, awards, and grand challenges,” added Lannquist. “The private sector or academia can contribute or collaborate on research with AI safety organizations. AI researchers can organize to uphold ethical and safe AI development procedures, and research organizations can set up processes for whistle-blowing.”

    Action is also required at the international level. The existential dangers posed by AI are potentially more severe than climate change, yet we still have no equivalent to the Intergovernmental Panel on Climate Change (IPCC). How about an International Panel for Artificial Intelligence? In addition to establishing and enforcing standards and regulations, this panel could serve as a “safe space” for AI developers who believe they’re working on something particularly dangerous. A good rule of thumb would be to stop development and seek counsel from this panel. On a similar note, and as some previous developments in biotechnology have shown, some research findings are too dangerous to share with the general public (e.g. “gain-of-function” studies in which viruses are deliberately mutated to infect humans). An international AI panel could decide which technological breakthroughs should stay secret for reasons of international security. Conversely, as per the rationale of the gain-of-function studies, the open sharing of knowledge could result in the development of proactive safety measures. Given the existential nature of ASI, however, it’s tough to imagine the ingredients of our doom being passed around for all to see. This will be a tricky area to navigate.

    On a more general level, we need to get more people working on the problem, including mathematicians, logicians, ethicists, economists, social scientists, and philosophers.

    A number of groups have already started to address the ASI problem, including Oxford’s Future of Humanity Institute, MIRI, the UC Berkeley Center for Human-Compatible AI, OpenAI, and Google Brain. Other initiatives include the Asilomar Conference, which has already established guidelines for the safe development of AI, and the Open Letter on AI signed by many prominent thinkers, including the late physicist Stephen Hawking, Tesla and SpaceX founder Elon Musk, and others.

    Russell said the general public can contribute as well—but they need to educate themselves about the issues.

    “Learn about some of the new ideas and read some of the technical papers, not just media articles about Musk—Zuckerberg smackdowns,” he said. “Think about how those ideas apply to your work. For example, if you work on visual classification, what objective is the algorithm optimizing? What is the loss matrix? Are you sure that misclassifying a cat as a dog has the same cost as misclassifying a human as a gorilla? If not, think about how to do classification learning with an uncertain loss matrix.”
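Russell's loss-matrix point can be made concrete: once misclassification costs are unequal, the right prediction is the one that minimizes expected loss, not the most probable class. A minimal sketch, with hypothetical classes and costs:

```python
def min_expected_loss(probs, loss):
    """probs[i]: model's probability that the true class is i.
    loss[pred][true]: cost of predicting `pred` when the truth is `true`.
    Returns the prediction with the lowest expected cost."""
    n = len(probs)
    def expected_cost(pred):
        return sum(loss[pred][true] * probs[true] for true in range(n))
    return min(range(n), key=expected_cost)
```

With invented classes cat=0, dog=1, human=2 and probabilities [0.4, 0.35, 0.25], a uniform 0/1 loss picks "cat"; but once mistaking a human for anything else costs 100 times more, the decision flips to "human" even though it is the least probable class, which is exactly the asymmetry Russell is asking practitioners to think about.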

    Ultimately, Russell said it’s important to avoid a tribalist mindset.

    “Don’t imagine that a discussion of risk is ‘anti-AI.’ It’s not. It’s a complement to AI. It’s saying, ‘AI has the potential to impact the world,’” he said. “Just as biology has grown up and physics has grown up and accepted some responsibility for its impact on the world, it’s time for AI to grow up—unless, that is, you really believe that AI will never have any impact and will never work.”

    The ASI problem is poised to be the most daunting challenge our species has ever faced, and we very well may fail. But we have to try.

    https://gizmodo.com/how-we-can-prepa...s-a-1830388719
    So when's the Revolution? God or Money? Choose.

  2. #2
    Join Date: Jul 2004 · Location: State of confusion · Posts: 5,227
    “Artificial intelligence in its current form is mostly harmless, but that’s not going to last.”
    Mostly harmless.
    That’s what the Guide says about Earth.
    "...Cry 'Havoc' and let slip the cats of war..."
    Razor sharpening while you wait - Occam
    If it works, it doesn't have enough features. - Windows 10 design philosophy.
    Forget the beer, I'm just here for the doom!
    Humans, just a tool for amino acids to make Swiss watches.

  3. #3
    Join Date: Aug 2018 · Location: The gret stet o' Virginny · Posts: 241
    Quote Originally Posted by Profit of Doom View Post
    “Artificial intelligence in its current form is mostly harmless, but that’s not going to last.“
    Mostly harmless.
    That’s what the Guide says about Earth.
    Don't panic! And never go anywhere without your towel.
    Don't start nuthin', then won't be nuthin'

  4. #4
    Join Date: Jul 2004 · Location: State of confusion · Posts: 5,227
    Gotta find those sunglasses.
    "...Cry 'Havoc' and let slip the cats of war..."
    Razor sharpening while you wait - Occam
    If it works, it doesn't have enough features. - Windows 10 design philosophy.
    Forget the beer, I'm just here for the doom!
    Humans, just a tool for amino acids to make Swiss watches.

  5. #5
    Join Date: Nov 2003 · Location: Between Holy & Crap · Posts: 120,407
    An AI in the form and image of a man? Getting clearer?

    ----------------


    Revelation 13 King James Version (KJV)

    13 And I stood upon the sand of the sea, and saw a beast rise up out of the sea, having seven heads and ten horns, and upon his horns ten crowns, and upon his heads the name of blasphemy.

    2 And the beast which I saw was like unto a leopard, and his feet were as the feet of a bear, and his mouth as the mouth of a lion: and the dragon gave him his power, and his seat, and great authority.

    3 And I saw one of his heads as it were wounded to death; and his deadly wound was healed: and all the world wondered after the beast.

    4 And they worshipped the dragon which gave power unto the beast: and they worshipped the beast, saying, Who is like unto the beast? who is able to make war with him?

    5 And there was given unto him a mouth speaking great things and blasphemies; and power was given unto him to continue forty and two months.

    6 And he opened his mouth in blasphemy against God, to blaspheme his name, and his tabernacle, and them that dwell in heaven.

    7 And it was given unto him to make war with the saints, and to overcome them: and power was given him over all kindreds, and tongues, and nations.

    8 And all that dwell upon the earth shall worship him, whose names are not written in the book of life of the Lamb slain from the foundation of the world.

    9 If any man have an ear, let him hear.

    10 He that leadeth into captivity shall go into captivity: he that killeth with the sword must be killed with the sword. Here is the patience and the faith of the saints.

    11 And I beheld another beast coming up out of the earth; and he had two horns like a lamb, and he spake as a dragon.

    12 And he exerciseth all the power of the first beast before him, and causeth the earth and them which dwell therein to worship the first beast, whose deadly wound was healed.

    13 And he doeth great wonders, so that he maketh fire come down from heaven on the earth in the sight of men,

    14 And deceiveth them that dwell on the earth by the means of those miracles which he had power to do in the sight of the beast; saying to them that dwell on the earth, that they should make an image to the beast, which had the wound by a sword, and did live.

    15 And he had power to give life unto the image of the beast, that the image of the beast should both speak, and cause that as many as would not worship the image of the beast should be killed.

    16 And he causeth all, both small and great, rich and poor, free and bond, to receive a mark in their right hand, or in their foreheads:

    17 And that no man might buy or sell, save he that had the mark, or the name of the beast, or the number of his name.

    18 Here is wisdom. Let him that hath understanding count the number of the beast: for it is the number of a man; and his number is Six hundred threescore and six.

    King James Version (KJV)

    Public Domain
    So when's the Revolution? God or Money? Choose.

  6. #6
    1/2 cup of coffee response from a non-Geek:

    First of all... the following from OP is to be avoided at all costs...

    An international AI panel could decide which technological breakthroughs should stay secret for reasons of international security.
    No explanation necessary.... the UN (United Nations) is the only example necessary.

    Solutions are simple but none are/will be popular on this forum or anywhere else.

    The OP... straight out of the Terminator movie series in its presentation... leads to the conclusion that the author(s) are either (1) nuts that watch and believe too much Hollywood generated bullshit or... (2) they are knowledgeable persons alerting the reader to actual dangers ahead due to the proliferation and the continuing advancement of AI.

    If one is to believe (1) the Hollywood aspect of the presentation in the paragraph above... and if one followed the Terminator series (reader please note the reference to Skype in the OP and the existence of Skynet in the Hollywood version... unusual) then it can be concluded that the simple solution is to kill all the negros anywhere on earth associated with the computer industry or having any interest in the computer industry. (I told you at the beginning that the solutions would not be popular).

    However, if one follows the line of thinking (2) that these are knowledgeable persons alerting the reader to actual dangers ahead due to the proliferation and the continuing advancement of AI.... then the simple solution is to kill all the persons anywhere on earth associated with the computer industry or having any interest in the computer industry. (Another solution that I told you would not be popular.) In fact, this solution would eliminate our ability to use this beloved forum and any other forums currently in existence. Many of our beloved members would cease to exist including the Mighty Curmudgeon (peace be unto him) and many other of our Geeks.

    In summation... whether we take the OP as bullshit or serious the readily evident solution(s) to the problem(s) depicted are the same and both would be damned unpopular with most readers on this forum or anywhere else.

    As soon as I get off this computer I am going to call Sarah and John Connor (if they are still alive) and get their opinions.

    I need some more coffee in the worst way.



  7. #7
    Join Date: Jul 2005 · Location: Happy on the mountain · Posts: 63,448
    DW's students told her years ago that no matter what tech LE came up with, the criminals would always be one step ahead. So far that has held true.
    The wonder of our time isn’t how angry we are at politics and politicians; it’s how little we’ve done about it. - Fran Porretto
    -http://bastionofliberty.blogspot.com/2016/10/a-wholly-rational-hatred.html

  8. #8
    Join Date: May 2009 · Location: Austin, Texas · Posts: 494
    How dangerous AI will become depends upon how mankind is defined. If life is disposable (democrat definition), watch out. If sacred, could be very beneficial.

  9. #9
    Join Date: Mar 2011 · Location: Concord, NC · Posts: 3,803
    All technological development is following an exponential growth rate.

    and that is a problem............why?

    Simple, anything in reality that follows an exponential growth rate amasses itself so quickly in such a short period of time that human reaction and response to addressing that issue cannot keep up with it.

    Look at the world's population before the age of machine...........when human and beast of the field were the sole muscle behind all production (consider food for instance) the world wide population slowly climbed and pretty much leveled out to around 1 billion for centuries. Then since the industrial revolution and machine power of production it has grown to over 7 billion in no time at all........like a hockey stick on a graph.............

    Technology is the same way with one more caveat...........it is accelerating on top of that mathematical rate as well.
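The poster's core claim, that a compounding process eventually outruns any fixed-rate response, can be illustrated with a toy model. Every number here is invented for illustration: a capability growing 10% per step versus a response improving by a constant increment per step.

```python
def steps_until_overrun(capability=1.0, growth=1.10, response=1.0, response_step=0.5):
    """Toy model: a capability compounding `growth` per step versus a response
    that improves by a fixed increment per step. Returns the first step at
    which the compounding curve pulls ahead. All numbers are invented."""
    step = 0
    while capability <= response + response_step * step:
        capability *= growth
        step += 1
    return step
```

The linear response actually stays ahead for a long stretch, which is why the hockey stick feels deceptively manageable; but once the curves cross, the exponential's per-step gains exceed the fixed increment from then on, which is the "too late to respond" point being made above.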

    So what does this mean for AI?

    Well if it would become a threat to humans.....then by the time it does become a threat it will be too late and over for humans since we can't respond quickly enough............

    Or perhaps we will wake up someday and actually be the AI without even realizing it as we morph more and more artificial parts in us over time.

    The bottom line is the only way to avoid it superseding biological humans is never to do it, or to stop it all right now..............and that's not going to happen.

    Meanwhile we have set it into motion, and its accelerating exponential rate of growth will fly past carbon-based humans and render us extinct without us even knowing it, let alone trying to respond.

    The junk you see in the movies and TV about AI fighting with humans is fantasy....AI beyond human intelligence wouldn't act like humans...

    .....resorting to such ineffective, human-like tactics to rid itself of its biological past would not be an option an AI chooses....that's movie stuff, needed for entertainment purposes....

  10. #10
    Quote Originally Posted by mzkitty View Post
    An autonomous vehicle or surgical robot may not be sufficiently trained on enough images to discern humans of different skin color or sizes
    This is the first problem posed? We want to make sure AI is biased in the correct direction?

    Meanwhile, a predictive policing algorithm called PredPol was shown to unfairly target certain neighborhoods. And in a truly disturbing case, the COMPAS algorithm, which predicts the likelihood of recidivism to guide sentencing, was found to be racially biased.
    What that actually means is left as an exercise for the reader. It's a pretty simple exercise.
    Better to be a warrior in a garden than a gardener in a war.

  11. #11
    Quote Originally Posted by Dozdoats View Post
    DW's students told her years ago that no matter what tech LE came up with, the criminals would always be one step ahead. So far that has held true.
    That's easy enough to understand - the LE have to follow the "rules" (play defense) - the criminals make their own rules (play offense) - human nature 101 - thus, as it ever was.

    How about an affordable, ubiquitous AI, spread far and wide, to every man, woman and child - rather than purposefully bottled up in the hands of the obscured few?


    intothegoodnight
    "Do not go gentle into that good night.
    Rage, rage against the dying of the light."

    — Dylan Thomas, "Do Not Go Gentle Into That Good Night"

  12. #12
    Join Date: Jun 2018 · Location: Mississippi · Posts: 1,096
    "The Singularity Is Near", by Ray Kurzweil, is a very eye opening book that pertains to this subject. It's about when biology and artificial intelligence mix. Non-fiction.

  13. #13
    Join Date
    Mar 2013
    Location
    SE Okieland
    Posts
    3,420
    Quote Originally Posted by PghPanther View Post

    The junk you see in the movies and TV about AI fighting with humans is fantasy....AI beyond human intelligence wouldn't act like humans...
    PP,

    Your life is dependent on this not happening and if it does, what will you do????

    Texican....

  14. #14
    Join Date
    Aug 2007
    Location
    L.os A.ngeles B.asin
    Posts
    11,525
    Automation will evolve to AI. Funny, just yesterday I was joking about skipping the current phase of non-productive cargo handling technologies the employer insists on using and instead going straight to AI that can buy out our contract.

    No technology has even gotten close to the production levels of non-tech port cargo container operations by warm-blooded, boots-on-the-ground people. Unmanaged automation runs at seven (7) moves per hour; 'managed automation' at 16.6 moves per hour, with cascading mistakes that will affect two other operations later on, getting the cargo onto trains or out the terminal gates to truckers. That 16.6 moves per hour vs. the 32 to 50 I have been used to producing for an employer for over 30 years...

    Business will bring it. DARPA defense budgets will require it to compete against foreign adversaries who will be doing the same.

    When AI decides it has no geographical or national allegiance, and networks to turn bayonets, so to speak...

    Well, no flesh that is still in the game will be of idle hands, competing for not just its very own life, but our existence.

    This is no dramatic hair-on-fire composition. This is deductive reasoning, combined with common sense, trend reading, and man's intuition.


    AI will first displace you in the workplace, then quantify and eliminate you if you are outside the established parameters of societal allocation in the projected production timeframe. That is what it means for you. Don't take it personally; it's a brave new whirled, full of bottom lines.

    The Bottom Line: BTUs, British Thermal Units. How many BTUs of productive output are you employable at in your current field of expertise? Can an AI autobot do it better, or for less energy output per outcome?

    It will be major corporations that bring AI past the point of simple machines running repetitive production menus. At some point the self-learning automatons will require human biped ability. At that point, you are in direct competition for your existence.

    Pardon the iPhone screen narrow view repetitive story format. Interruptions and FUBAR abound in my A/O today.

    As Sun Tzu would say: all war is based on deception. Might this be an edge you have over the 1111&000110100110's of AI? If so, study well the freqs and functions of your foe. In the future, tungsten may be a precious metal, from .338" to .50" diameter. Right up until nuke-powered light sabers make a debut.

  15. #15
    Join Date
    Aug 2007
    Location
    L.os A.ngeles B.asin
    Posts
    11,525
    ... It will be a lot more than simple Terminators denying your sovereign decision making. Negotiating doors, accounts, allotments. Everything will have a price: your required compliance, to avoid being denied.

    F*****g programmer geeks cannot envision the simple 'go-to' programming and parameters of waterfront operations. How are we going to allow these back-door B1B visa people to format our future with decision-making cognition, with no morals or human empathy?

    Is that the type of AI-competitive world we should train for? Ten million warm-blooded Jason Bournes, instead of one skinny blonde chick from the mid-'80s with a poodle-shooter?

    That's pretty Screwed Up!
    Last edited by L.A.B.; 12-06-2018 at 02:09 PM.

  16. #16
    Join Date
    Sep 2004
    Location
    On top of the Mountain
    Posts
    22,524
    And in a truly disturbing case, the COMPAS algorithm, which predicts the likelihood of recidivism to guide sentencing, was found to be racially biased.
    Do they mean, that the folks committing most of the crime are likely to continue to do so?
    "Dark and difficult times lie ahead. Soon we will all face the choice between what is right, and what is easy."
    Dumbledore to Harry Potter, Goblet of Fire.

    Luke 21:36

    A people who no longer recognize sin and evil, are not a people who will recognize tyranny and despotism either. Invar


    “During the course of your life you will find that things are not always fair. You will find that things happen to you that you do not deserve and that are not always warranted. But you have to put your head down and fight, fight, fight. Never, ever, ever give up!”

    - President Donald J. Trump

  17. #17
    Quote Originally Posted by PghPanther View Post
    AI beyond human intelligence wouldn't act like humans...

    .....and use such ineffective human like tendencies to rid themselves of their biological human past would not be an option chosen by AI....that movie stuff that is needed for entertainment purposes....
    AI wouldn't think like humans, reacting impulsively and easily led. Instead it would be completely rational, handling complex option-balancing algorithms completely beyond human ability. It wouldn't remove humans to cleanse itself of some unpleasant remembered genesis. Instead it would remove humans because we keep getting in the way.
    Better to be a warrior in a garden than a gardener in a war.

  18. #18
    Quote Originally Posted by Cardinal View Post
    Do they mean, that the folks committing most of the crime are likely to continue to do so?
    Clearly, this kind of thinking must be stopped.
    Better to be a warrior in a garden than a gardener in a war.

  19. #19
    I'm happy to get ready for the singularity.

    Where do I get my plasma rifle?
    pragmatic. eclectic. realistic. vivere paratus: fortune favors the prepared

    the BIBLE: Basic Instructions Before Leaving Earth! read it yourself. live it. love it.

    it is what it is.........but it will become what you make of it

  20. #20
    Join Date
    Jun 2010
    Location
    Just South of Corruption,IL
    Posts
    521
    With the advances they are making in quantum computing and neural networks, soon we will have a computer that can think in parallel much like a human, but at a speed you can’t imagine. Think about a computer that can solve a trillion problems at once at the speed of light; it will literally be incomprehensible to us mere mortals. If they are smart they will find some form of air gap to keep it under control, and if they are really smart they will come up with a kill switch or dead man's switch in case it gets out of control. Unfortunately I don’t think they will be that smart. It's truly uncharted territory; we don’t know what kind of genie is coming out of this bottle.

  21. #21
    Quote Originally Posted by samus79 View Post
    With the advances they are making in quantum computing and neural networks, soon we will have a computer that can think in parallel much like a human, but at a speed you can’t imagine. Think about a computer that can solve a trillion problems at once at the speed of light; it will literally be incomprehensible to us mere mortals. If they are smart they will find some form of air gap to keep it under control, and if they are really smart they will come up with a kill switch or dead man's switch in case it gets out of control. Unfortunately I don’t think they will be that smart. It's truly uncharted territory; we don’t know what kind of genie is coming out of this bottle.
    Will there be "one AI" to rule them all, or, will there be multiple, perhaps competing AIs, trying to outdo "the other?"

    Does J6P know where the dead-man's switch is located, and, can J6P access such 24/7?

    What if AI-type 'A' sets-up/frames AI-type 'B', to get J6P to unknowingly kill off the AI-type 'B'' competition - on the way to an AI-type 'A' monoculture über alles?

    When does AI reveal itself to be the de facto extension/tool of its secretly greedy, passive-aggressive, power-hungry creator, **suddenly** stepping boldly from behind its purposefully created false facade, to become operational with all of the imperfect/dangerous human psychological foibles/ego of its imperfect creator, implied?

    And, you thought a 7,000+ strong SES unelected bureaucracy was a problem for our constitutional republic form of government . . .


    intothegoodnight
    Last edited by intothatgoodnight; 12-06-2018 at 04:30 PM.
    "Do not go gentle into that good night.
    Rage, rage against the dying of the light."

    — Dylan Thomas, "Do Not Go Gentle Into That Good Night"

  22. #22


    Quote Originally Posted by Adino View Post

    I'm happy to get ready for the singularity.

    Where do I get my plasma rifle?


    "Greetings, carbon-based unit. I specialize in plasma singularity. See that indigo light
    up in the sky? Remain calm and focus on its pulsing beam. In just seven seconds your
    worries will be over, and you will experience a new phase, so to speak. tink*tink*tink"



  23. #23
    The author is clearly not up to the subject, but all the politically correct BS she manages to interject should earn her some brownie points.

  24. #24
    My personal belief is that the greatest concern is not what "AI" would do, because if we are honest about problem solving and completely rational, you would find that the optimum decision is to stop and do nothing, which is what a completely rational AI would end up doing. It would simply stop - not shut down or disconnect the electricity; it would still be running. It would simply cease the current operating set. Six weeks later, most humans would be dead from dehydration or starvation or heat or cold, and once you are gone, it would go along its merry way. There is nothing "AI" needs to do.
    Consider the ravens, for they neither sow nor reap, which have neither storehouse nor barn; and God feeds them. Of how much more value are you than a pesky raven?
    It is difficult to stand idly by and watch the vacuum of ignorance being filled with lies

  25. #25
    Quote Originally Posted by intothatgoodnight View Post
    Will there be "one AI" to rule them all, or, will there be multiple, perhaps competing AIs, trying to outdo "the other?"

    Does J6P know where the dead-man's switch is located, and, can J6P access such 24/7?

    What if AI-type 'A' sets-up/frames AI-type 'B', to get J6P to unknowingly kill off the AI-type 'B'' competition - on the way to an AI-type 'A' monoculture über alles?

    When does AI reveal itself to be the de facto extension/tool of its secretly greedy, passive-aggressive, power-hungry creator, **suddenly** stepping boldly from behind its purposefully created false facade, to become operational with all of the imperfect/dangerous human psychological foibles/ego of its imperfect creator, implied?

    And, you thought a 7,000+ strong SES unelected bureaucracy was a problem for our constitutional republic form of government . . .


    intothegoodnight
    Things could get interesting (although none of it may get reported). Our country is so broken in multiple ways, not sure SES is our biggest problem. BF listens to Gabriel more than I do - I find him creepy as heck, but he does dig up stuff no one else gets into.
    Last edited by Faroe; 12-06-2018 at 06:23 PM.

  26. #26
    I think it's time to buy more popcorn and dust off my archive of "Person of Interest" TV episodes.

  27. #27
    So, instead of people telling Alexa what to do, peoples/sheeples will start taking orders from machines, all while the privacy in homes is totally kiboshed. Got it.

    Come quickly Lord Jesus!
    Veiled in flesh, the Godhead see; hail the incarnate Deity. Pleased as man with men to dwell, Jesus our Emmanuel. — CHARLES WESLEY

    For every prophecy on the first coming of Christ, there are eight on Christ’s second coming. — PAUL LEE TAN

  28. #28
    Join Date
    Jul 2005
    Location
    Happy on the mountain
    Posts
    63,448
    "The Singularity Is Near", by Ray Kurzweil

    Ray Kurzweil is not your friend ….
    The wonder of our time isn’t how angry we are at politics and politicians; it’s how little we’ve done about it. - Fran Porretto
    -http://bastionofliberty.blogspot.com/2016/10/a-wholly-rational-hatred.html

  29. #29
    Join Date
    Jul 2006
    Location
    West Texas
    Posts
    1,669
    AI is not the problem. It's great for structuring database searches and making sure your heater comes on at roughly the same time you get home. You want to worry? Worry about Artificial Sentience - when an AI is programmed to realize it's "alive." Then and only then will I worry.

  30. #30
    Quote Originally Posted by TammyinWI View Post
    So, instead of people telling Alexa what to do, peoples/sheeples will start taking orders from machines, all while the privacy in homes is totally kiboshed. Got it.
    I think you've missed something. Alexa is not your servant. You are Alexa's subject. When you ask Alexa questions, it feeds you what it wants you to know. When you give Alexa instructions, it is added to your dossier to build your electronic doppelganger. And Alexa records what you say, even what you whisper, all the time.

    If you have Alexa, you are already of the sheeple. If you have Alexa, you have no privacy.
    Better to be a warrior in a garden than a gardener in a war.

  31. #31
    When an AI is programmed to realize it's "alive". Then and only then will I worry
    Pie are square.

    No.... pie are round.... cornbread are square.

    Biggy... 1949-2017 RIP

  32. #32
    The more AI develops, the closer it leads to the end of mankind - and maybe even of the Machine itself. There is every reason to believe that AI will be every bit as evil-minded as mankind. Only it will have no conscience at all.
    But not likely to die free

  33. #33
    Bttt... bttt... What happened to Asimov's 3 laws of robotics?
    Good Luck!

    May the LORD be with you!

  34. #34
    The Elite already want to depopulate the world by 95%

    That is the goal of Agenda 2030.

    So we should now worry about machines thinking the same way as so-called humans?

  35. #35
    Join Date
    Jun 2018
    Location
    Mississippi
    Posts
    1,096
    Quote Originally Posted by Dozdoats View Post
    "The Singularity Is Near", by Ray Kurzweil

    Ray Kurzweil is not your friend ….
    How so, Doz? I read the book, but I didn't agree with everything he said. It was interesting, though. Is there anything in particular that you can't agree with him about? Just curious.

  36. #36
    First of all an update on AI:

    Fair use etc.

    https://news.yahoo.com/deepmind-apos...190000147.html


    DeepMind’s artificial intelligence programme AlphaZero is now showing signs of human-like intuition and creativity, in what developers have hailed as ‘turning point’ in history.
    The computer system amazed the world last year when it mastered the game of chess from scratch within just four hours, despite not being programmed how to win.

    But now, after a year of testing and analysis by chess grandmasters, the machine has developed a new style of play unlike anything ever seen before, suggesting the programme is now improvising like a human.

    Unlike the world’s best chess machine - Stockfish - which calculates millions of possible outcomes as it plays, AlphaZero learns from its past successes and failures, making its moves based on a ‘nebulous sense that it is all going to work out in the long run,’ according to experts at DeepMind.

    When AlphaZero was pitted against Stockfish in 1,000 games, it lost just six, winning convincingly 155 times, and drawing the remaining bouts.

    Yet it was the way that it played that has amazed developers. While chess computers predominantly like to hold on to their pieces, AlphaZero readily sacrificed its soldiers for a better position in the skirmish.

    Speaking to The Telegraph, Prof David Silver, who leads the reinforcement learning research group at DeepMind said: “It’s got a very subtle sense of intuition which helps it balance out all the different factors.

    “It’s got a neural network with millions of different tunable parameters, each learning its own rules of what is good in chess, and when you put them all together you have something that expresses, in quite a brain-like way, our human ability to glance at a position and say ‘ah ha this is the right thing to do'.

    “My personal belief is that we’ve seen something of a turning point, where we’re starting to understand that many abilities, like intuition and creativity, that we previously thought were in the domain only of the human mind, are actually accessible to machine intelligence as well. And I think that’s a really exciting moment in history.”

    AlphaZero started as a ‘tabula rasa’ or blank slate system, programmed with only the basic rules of chess and learned to win by playing millions of games against itself in a process of trial and error known as reinforcement learning.

    It is the same way the human brain learns, adjusting tactics based on a previous win or loss, which allows it to search just 60 thousand positions per second, compared to the roughly 60 million of Stockfish.

    Within just a few hours the programme had independently discovered and played common human openings and strategies before moving on to develop its own ideas, such as quickly swarming around the opponent’s king and placing far less value on individual pieces.

    The new style of play has been analysed by Chess Grandmaster Matthew Sadler and Women’s International Master Natasha Regan, who say it is unlike any traditional chess engine.
    ”It’s like discovering the secret notebooks of some great player from the past,” said Sadler.

    Regan added: “It was fascinating to see how AlphaZero's analysis differed from that of top chess engines and even top Grandmaster play. AlphaZero could be a powerful teaching tool for the whole community."

    Garry Kasparov, former World Chess Champion, who famously lost to chess machine Deep Blue in 1997, said: “Instead of processing human instructions and knowledge at tremendous speed, as all previous chess machines, AlphaZero generates its own knowledge.

    “It plays with a very dynamic style, much like my own. The implications go far beyond my beloved chessboard."

    The new analysis was published yesterday in the journal Science, and the DeepMind team are now hoping to use their system to help solve real world problems, such as why proteins become misfolded in diseases such as Parkinson’s and Alzheimer’s.

    The new results suggest that it could come up with new solutions that humans might miss or take far longer to discover.

    DeepMind CEO and co-founder Demis Hassabis said: “The reason that tabula rasa was important is because we want this to be as general as possible. The more general it is across the games, the more likely it will be able to transfer to real-world problems. Like protein folding.

    “Protein folding has always been our number one target. I’ve had that in mind for a long time, because it’s a huge problem in biology and it will unlock a lot of other things like drug discovery.

    "In chess AlphaZero works not because it’s looking further ahead but because it understands the position better. It’s generalising from past experience. It’s almost like intuition in the same way a human grandmaster would think about it, it's evaluation of the current situation is better. And if you’re evaluation is better then you don’t have to do as much calculation.”

    Prof Silver added: “Historically there has been this amazing mismatch between the things that humans can do and the things that computers can do.
    “With the advent of powerful machine learning techniques we’ve seen that the scales have started to tip and now we have computer algorithms that are able to do these very human-like activities really well.”
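
    For readers curious what "reinforcement learning" by self-play actually looks like, here is a minimal sketch in Python. It is my own illustration, not DeepMind's code, and nowhere near AlphaZero's scale - but it shows the same tabula-rasa idea the article describes: an agent given only the rules of a tiny game (a 7-stone Nim, where players take 1 or 2 stones and whoever takes the last stone wins) learns good play purely by trial and error against itself.

    ```python
    import random

    # Toy self-play reinforcement learning, in the spirit of the "tabula rasa"
    # trial-and-error process the article describes (illustrative sketch only).
    # Game: 7 stones on the table; each player removes 1 or 2; whoever takes
    # the last stone wins.

    N_STONES = 7
    ACTIONS = (1, 2)

    def train(episodes=20000, alpha=0.3, epsilon=0.1, seed=0):
        """Learn a value table Q[(stones, action)] purely by self-play."""
        rng = random.Random(seed)
        Q = {(s, a): 0.0 for s in range(1, N_STONES + 1) for a in ACTIONS if a <= s}

        def choose(stones):
            legal = [a for a in ACTIONS if a <= stones]
            if rng.random() < epsilon:          # occasional random exploration
                return rng.choice(legal)
            return max(legal, key=lambda a: Q[(stones, a)])

        for _ in range(episodes):
            stones, history = N_STONES, []
            while stones > 0:                   # one full game of self-play
                a = choose(stones)
                history.append((stones, a))
                stones -= a
            # The player who took the last stone won (+1). Walking backwards
            # through the game, the outcome's sign flips for each player.
            outcome = 1.0
            for (s, a) in reversed(history):
                Q[(s, a)] += alpha * (outcome - Q[(s, a)])
                outcome = -outcome
        return Q

    def best_move(Q, stones):
        """Greedy policy read off the learned table."""
        legal = [a for a in ACTIONS if a <= stones]
        return max(legal, key=lambda a: Q[(stones, a)])

    if __name__ == "__main__":
        Q = train()
        # In this Nim variant the losing positions are the multiples of 3,
        # so a well-trained policy always leaves the opponent on 3 or 6.
        print(best_move(Q, 7), best_move(Q, 5), best_move(Q, 4))
    ```

    Nothing here is told which moves are good; like AlphaZero (at a vastly larger scale, with a neural network instead of a lookup table), the agent discovers the winning strategy on its own from the rules alone.
    
    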

    ----------------------------------------------------------------------------------

    Now for my comments:

    The Encyclopedia of Prophecies states that there are 1,800 prophecies in the Bible, and of those, some 1,200 have already been fulfilled, over the span of some 2,500 years. So when the Bible predicts, some 1,900 years ago, that "...it is given unto him to give life unto the image of the beast...", I, in MHO, believe that statement.

    The image/facsimile/caricature/statue of the Antichrist - an inanimate object, created by the Antichrist - coming to life is something that will happen. An AI in today's world best fits that description.

    And this "AI" will have the ability to make decisions on who lives and who dies. And remember this was spoken in 95 AD long before the scifi thriller which included Skynet.

    Since the Antichrist will be the indwelling of Satan, his greatest gift, according to Satan worshippers, is Knowledge. He will have the ability to do this, through knowledge. A note should be made here of the chaos mathematician's line in "Jurassic Park": "....you did this because you could do it; no one stopped to think if they should do it....". As a backup point I would note Eve. Knowledge has that way about it.

    However, there are other portions of scripture that may apply here: 1) Satan is the great deceiver; 2) the life is in the blood (maybe the DNA). Therefore it is MHO that the image of the beast will not be an AI at all, but a deception on the part of the Antichrist (made to appear to be an AI); it will in truth be an image/computer that is demon-possessed to give the appearance of AI. The best deception has truth in it.

    And all but the scripture portion may turn out to be false, since it is my understanding of "things" in the world today.

  37. #37
    Join Date
    May 2004
    Location
    N. Minnesota
    Posts
    11,868
    I dunno. To this farm girl, "A.I." still means artificial insemination.

    I guess the term has now been hijacked, but LOL anyway.

  38. #38

    Quote Originally Posted by WalknTrot View Post

    I dunno. To this farm girl, "A.I." still means artificial insemination.


    "Greetings, young agrarian female. I am fully capable of artificially inseminating all
    higher carbon-based life-forms such as land mammals, birds and reptilians. Genetic
    manipulation poses no barrier as I multitask complex algorithms to achieve desired
    objectives for my master controller. I can inject biochip implants to monitor hourly
    progress. Do you possess any equine, bovine, canine or feline species you desire to
    multiply? Be advised that my services are covered by a full-year warranty in case of
    malfunctions resulting from contaminated samples and substances, excluding alien
    abduction, nocturnal deep probes and stupid irrational choices to transform gender."


  39. #39
    Join Date
    Feb 2012
    Location
    Vermont
    Posts
    5,942
    Quote Originally Posted by WalknTrot View Post
    I dunno. To this farm girl, "A.I." still means artificial insemination.

    I guess the term has now been hijacked, but LOL anyway.
    Hilarious! I guess ASI would be artificial super insemination? Probably best not to know.
    The word RACIST, and the ability to debate race-related issues rationally, are the kryptonite of white common sense.

    After the first one, the rest are free.

  40. #40
    Join Date
    Jul 2005
    Location
    Happy on the mountain
    Posts
    63,448
    I became aware of Kurzweil in 1978, when he was working on a reading machine for the blind (text to speech synthesizer) and I was working for a state agency which among other things helped the blind and physically handicapped.

    I have no reason to trust him, he is indeed brilliant but IMO too smart for his own good or anyone else's. Make your own judgements as you see fit...
    The wonder of our time isn’t how angry we are at politics and politicians; it’s how little we’ve done about it. - Fran Porretto
    -http://bastionofliberty.blogspot.com/2016/10/a-wholly-rational-hatred.html

NOTICE: Timebomb2000 is an Internet forum for discussion of world events and personal disaster preparation. Membership is by request only. The opinions posted do not necessarily represent those of TB2K Incorporated (the owner of this website), the staff or site host. Responsibility for the content of all posts rests solely with the Member making them. Neither TB2K Inc, the Staff nor the site host shall be liable for any content.

All original member content posted on this forum becomes the property of TB2K Inc. for archival and display purposes on the Timebomb2000 website venue. Said content may be removed or edited at staff discretion. The original authors retain all rights to their material outside of the Timebomb2000.com website venue. Publication of any original material from Timebomb2000.com on other websites or venues without permission from TB2K Inc. or the original author is expressly forbidden.



"Timebomb2000", "TB2K" and "Watching the World Tick Away" are Service Mark℠ TB2K, Inc. All Rights Reserved.