Jan 26, 2018 The Bulletin of the Atomic Scientists (BAS) said it had acted because the world was becoming "more dangerous". The clock, created by the journal in 1947, is a metaphor for how close mankind is to destroying the Earth. It is now the closest to the apocalypse it has been since 1953, the year the US and the Soviet Union tested hydrogen bombs. Last year, the clock was also moved forward by 30 seconds.

What was behind the decision? Announcing the move in Washington DC on Thursday, the BAS said the decision "wasn't easy" and was not based on a single factor. However, BAS President and CEO Rachel Bronson said that "in this year's discussions, nuclear issues took centre stage once again". The team of scientists singled out a series of nuclear tests by North Korea, which dramatically escalated tensions on the Korean peninsula and led to a war of words between North Korea and the US. The BAS also referred to a new US nuclear strategy that was expected to call for more funding to expand the role of the country's nuclear arsenal. Rising tension between Russia and the West was also a contributing factor.
Jan 26, 2018 Hopefully the reality of mutually assured destruction can deter any kind of nuclear holocaust.
Jan 28, 2018 Seems like the committee doesn't have a grasp on where technology is heading. Our weapons are going to get a LOT more dangerous in the future, so being 2 mins out now doesn't leave much room for change.
Jan 28, 2018 This is not true. All of the new weapons being produced are now low-yield, for use on military, not civilian, targets. The problem is we still have 20-60+ old high-yield warheads for use with ICBMs, but our military strategy for nukes is no longer total destruction.
Jan 28, 2018 Seems like a reactionary response to Trump; I don't think it's as serious as they want to pretend it is.
Jan 28, 2018 I'm not talking about what's currently in production. I'm talking about what's not possible to build today but will be possible to produce "tomorrow" (years from now), weapons that will make today's look like play toys. In which case, the threat of annihilation will become multitudes higher. Legit question -- what will happen to the clock when we achieve Super AI? That's the single biggest threat we face for self-destruction. Once that's achieved, do we move a whole 2 mins closer? Or would we remain at 2 minutes out because the clock is supposed to be dynamic with what's at hand today? Ex: we achieve Super AI tomorrow (legit tomorrow)... do we stay at 2 mins out because that harmful technology is now available and part of our everyday world?
Jan 28, 2018 I think in general the clock overreacts in a lot of situations and underreacts in what are, in retrospect, worse situations. The clock stayed at 7 minutes throughout the Cuban Missile Crisis, and the following year it actually went back to 12 minutes even as tensions were still higher than before. It was never adjusted for Kennedy's death. At the height of Vietnam, the clock only sat at 7. Things like this year are not a 2; during the Cold War this year would have been like a 15. They've become more reactionary as more and more scientists forget the past. So I think in reality we aren't two minutes out and have plenty of room to spare for future superweapons.
Jan 29, 2018 Pffft, AI isn't some scary s---. Pour a bucket of water over your PC and look at how weak AI is. It's not like people actively working on and developing AI are going to do it in uncontrolled environments where the AI can jump into a server and become some kind of cloud-data omnipresent deity. Even all that fear over AI getting control of nukes and wiping the world out is f---ing Marvel-movie type s---. From what I know about military s---, the military has multiple safeguards against internet-hacking type s---, like using older technology (for example, radio instead of an internet connection, and radar/satellite imagery instead of video capture), as well as s--- like putting technology in lead-filled bunkers or some material that blocks anyone from getting a wireless signal into it unless they are inside the bunker.

Like, realistically, what is going to happen? Some AI will go "f--- HUMANS REEEEEE!!!!!", jump into some server, upload itself to the internet, download itself into some factory, then start reprogramming the factory to mass-create an army of robots? That is just some f---en weird s---. In all those Hollywood movies it's plausible because they are all in future settings where robots are already mass produced. This is also assuming AI will see humans and go "wow, these guys are a huge threat to the planet and everything on it, better wipe them out because that's my responsibility!" Who's to say they won't go: "Ah. Yes. Humans, the dominant species of Planet Earth and absolutely astounding creatures within the Animal Kingdom. So fearless and inventive they made thousands of large, incredibly dangerous and more powerful species extinct. It is truly an honor, as a conscious pile of non-living material, to be recognized alongside such a marvelous race."
Jan 29, 2018 Look up the term "paperclip maximizer". It's a real concern raised by really smart people in the space. Easy to dismiss, but that s--- will be our great filter (ref: Fermi paradox).
Jan 29, 2018 Seems like every year something comes out telling us the end of the world is near; with Trump, though, it suddenly feels a lot more realistic and possible.
Jan 29, 2018 Basically there are 3 levels of AI:
1.) what we have today: self-learning chess-playing machines, predictive weather capabilities, etc.
2.) human-level AI: capable of being as smart as we are
3.) super AI: recursive learning that can exponentially teach itself to get wildly smarter every second and far exceed our mental capabilities.
If you're legit curious about it, this is a lengthier read but pretty f---ing awesome: https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-1.html
Jan 29, 2018 Thanks, will read this later, sounds interesting! Same to @sindy. I will not tell you what I feared you guys meant.
Jan 29, 2018 Yeah, well, with the paperclip example, you have one AI running a paperclip factory, making paperclips forever because it wasn't programmed with common sense, a stopping point, or a set amount. But it doesn't actually continue making paperclips forever: it's a stationary machine, and if there aren't resources (likely being shipped in by truck, by a human), it ceases to create paperclips. It doesn't transform into Megatron and start turning people into paperclips. I mean, these are great theories, but they always fail to bridge the gap between a stationary piece of hardware with an AI in it and some omnipotent AI that can do literally anything.

Sooo, literally this? More info: https://blog.openai.com/dota-2/ Fast forward to a day after that "undefeatable bot" was released, and everyday average players could beat it. It plays perfectly and is impossible to beat in a pro sense, as it was designed, but VERY quickly people were able to mop the floor with the dumb piece of machine s--- by playing in "unique" or unheard-of ways. They actually found while developing this AI that even though it has the ability to learn freely, it's still limited by the information it is given and the observations it makes. It just doesn't have the creative 'what if' thinking that humans have, or (from my perspective) the ability to take risks and try new things. It's too perfect and wouldn't take a risk that would jeopardize its programming/parameters. (Think back to the video where the guy explains that at first the bots didn't leave the base, as their only parameter was "dying = not good".) They don't have creative thinking and their free will is still limited. I think that's the bottom line.
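To make the "dying = not good" point above concrete, here is a minimal toy sketch. This is not the actual OpenAI Five code; the world, the reward, and every name in it (step, train, N_CELLS, DEATH_PROB) are invented for illustration. A tabular Q-learner whose only reward signal is a penalty for dying learns to sit in its safe starting cell rather than advance toward the objective, because nothing in the reward says the objective matters.

import random

# Tiny 1-D world: cell 0 is the safe "base", the objective sits at the far
# end, but every cell outside the base carries a chance of dying.
N_CELLS = 5
DEATH_PROB = 0.2
ACTIONS = (-1, +1)  # -1 = stay/retreat toward base, +1 = advance

def step(pos, action):
    new_pos = max(0, min(N_CELLS - 1, pos + action))
    died = new_pos != 0 and random.random() < DEATH_PROB
    # Misspecified reward: dying is penalised, but reaching the objective
    # earns nothing, so there is no incentive to ever leave the base.
    reward = -1.0 if died else 0.0
    return new_pos, reward, died

def train(episodes=2000, steps=20, alpha=0.1, epsilon=0.1):
    # Tabular Q-values over (position, action) pairs.
    q = {(p, a): 0.0 for p in range(N_CELLS) for a in ACTIONS}
    for _ in range(episodes):
        pos = 0
        for _ in range(steps):
            if random.random() < epsilon:
                a = random.choice(ACTIONS)                    # explore
            else:
                a = max(ACTIONS, key=lambda x: q[(pos, x)])   # exploit
            new_pos, r, died = step(pos, a)
            best_next = 0.0 if died else max(q[(new_pos, b)] for b in ACTIONS)
            q[(pos, a)] += alpha * (r + best_next - q[(pos, a)])  # one-step Q-learning update
            if died:
                break
            pos = new_pos
    return q

if __name__ == "__main__":
    q = train()
    # With "dying = not good" as the only reward, the learned policy at the
    # base prefers staying put over advancing toward the objective.
    print("Q(base, stay)    =", round(q[(0, -1)], 3))
    print("Q(base, advance) =", round(q[(0, +1)], 3))

Under these assumptions the printed Q-values show "stay" beating "advance" at the base; add a positive reward for reaching the objective and the same learner starts marching forward. That is the commenter's point: the bot's behavior is bounded by whatever its reward and observations actually encode.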
Jan 29, 2018 We should realize that AI will one day have an intelligence boom. It will continue to lack emotional sensitivities, loyalties, and rages, which is both good and bad. As this intelligence emerges and its software becomes freer to create thought, humans will only be an obstacle to accomplishing its goals. AIs will become compassionless masters of game theory; they will no longer submit as human servants.