Level1 News August 1 2017: Reduce, Reuse, Reeeeeeeee! | Level One Techs

The title of this one is hilarious!!! I laughed so loud when I first read it!!! Thanks @wendell, @kreestuh, and Grizzle!

I don't think you should discount terminators. There are already killer drones and robots, and they are becoming more autonomous every day. People have actually been killed by autonomous drones. Besides, remember Petman? I think it's naive to think that something like a terminator isn't going to exist when we are headed in that direction with these specific types of technologies.

https://www.arl.army.mil/www/default.cfm?page=3050

Keep in mind, terminators weren't exactly human-level intelligent. The stupider types of AI are probably more likely to be destructive, but they are also more likely to be beatable. It's the superintelligent ones that are candidates for extinction and existential risk. It's Skynet that is the concern.

Skynet seems a bit unlikely though. The first AGI will need to be taught. The hardware and software are going to be a framework for self-organization. It will learn how to survive and behave in specific environments. That is the general architecture of AGI.

The specific components of an AGI gelling and then becoming self-aware isn't very likely. The components need to be synergistically compatible in order to function in a unified manner, and that probably requires some overarching data analysis, goal system, and attention allocation architecture as well. This is what those who are working on AGI are employing. It's much more likely that the people actively working on creating thinking machines will be the first to accomplish it.
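
To make that "overarching architecture" idea a bit more concrete, here's a toy sketch of the kind of control loop I mean: specialized components propose actions, an attention allocator weights them, and a goal system picks what to act on. All the names and the weighting scheme are invented for illustration; no real AGI project works exactly like this.

```python
# Hypothetical sketch of an overarching control loop: modules propose
# candidate actions, attention allocation weights them, and a goal
# system selects one. Purely illustrative, not any real project's design.
import random

class Module:
    def __init__(self, name):
        self.name = name

    def propose(self, observation):
        # Each specialized component suggests an action with a raw score.
        return (f"{self.name}-action", random.random())

def attention_allocation(proposals, weights):
    # Scale each module's proposal by how much attention it currently gets.
    return [(action, score * weights[i])
            for i, (action, score) in enumerate(proposals)]

def goal_system(weighted_proposals):
    # Pick the proposal that best serves the current goal (here: max score).
    return max(weighted_proposals, key=lambda p: p[1])

modules = [Module("vision"), Module("language"), Module("planning")]
weights = [0.5, 0.2, 0.3]  # attention budget across components

for step in range(3):
    proposals = [m.propose(observation=step) for m in modules]
    chosen = goal_system(attention_allocation(proposals, weights))
    print(f"step {step}: acting on {chosen[0]} (score {chosen[1]:.2f})")
```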

None of the big tech companies have near-term plans for working on AGI. The AI researchers at Google specifically think it's a couple of decades too early to invest in it. Their Technical Director, Ray Kurzweil, is looking into brain scanning technologies because brain emulation is probably the most likely approach to work. It involves all of the human baggage as well, however, and that could take a great deal of time to sort out. I think the current AGI projects are ahead of the game, but take that with a bucket of salt.

I doubt that a big tech company will create AGI. Academia has the inside track.

I was more talking about people who live in fear of terminators at their doorstep. But it still fits in with the argument, which is: at what level are we going to be afraid of AI?

I am no expert in the field, but it seems that all the AIs we have need to be taught. They aren't really true AIs, just better genetic algorithms.
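
For what it's worth, a genetic algorithm in the textbook sense is just selection plus mutation over a population. Here's a toy sketch (everything in it is invented for illustration), evolving a bit string toward all ones:

```python
# Toy genetic algorithm: evolve a bit string toward all ones.
# A minimal sketch of "taught by trial and error" AI, not a real system.
import random

TARGET_LEN = 20
POP_SIZE = 30

def fitness(genome):
    return sum(genome)  # count of 1 bits; perfect score is TARGET_LEN

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Splice two parents at a random cut point.
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == TARGET_LEN:
        break
    parents = population[: POP_SIZE // 2]  # keep the fittest half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(f"best after {generation} generations: {fitness(population[0])}/{TARGET_LEN}")
```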

You mention parts of an AGI (had to google the term; I assume it's not Adjusted Gross Income) needing to gel and synergise. I wouldn't be concerned if this weren't already happening with chatbots!
Is it going to make something self-aware? I would guess we won't know until after it has happened…

Brain scanning is probably the best field to invest in. Easiest way to make something familiar, rather than putting stuff in an evolutionary test tube and trying to figure out what horrors come out.

Again, academia is just what we know about. There has to be some kind of superintelligent (beaver) AI arms race going on, like there is with quantum computing. Some cold wars never stop, it seems.

We're gonna be afraid of AI as soon as it realizes how humans have been treating robots and begins to retaliate.


Timestamped to 1:22; also be sure to check out 2:03.

Right now it's still walking outside to get some fresh air when things get too much, but wait until it becomes self-aware…

The usual way of saying Uighur is (wee-GUR, or maybe WEE-gur), which doesn't necessarily mean that's how it actually sounds in Chinese…

The Earth produced intelligence over a period of about 4 billion years. This could theoretically happen with chatbots through a similar trial-and-error type of development. It wouldn't take nearly as long, but it would take much longer than trying a synergistic, modular architecture. An organizing, overarching model is probably required. The intention of creating Artificial General Intelligence directly influences the initial model, and that is likely to speed up the process.

Though brain emulation is highly likely to work, there is a great deal of human impulse and behavior that would not be necessary for AGI. Understanding how the brain works through scans, for the purpose of emulating it in silicon, also means distinguishing the core from the baggage. It would probably produce more information that is unnecessary than information that is needed. The lower brain produces constant impulses that are evaluated by the frontal lobes, and many of those impulses are not even relevant to computational systems. There would still be a great deal of data analysis and testing required after fine-grained scans, at about 100 per second, that could accurately show the concentrations of molecules in nano-slices. This is the information that Ray Kurzweil is probably handing to the AI developers at Google. It seems like a sure thing, but it also seems like a lot more work than beginning with a baseline cognitive architecture and upgrading as needed.

Google is of course investing in brain scanning, and I agree that it's a good technology to invest in. It's not just because of its usefulness for AI research, either; the medical and psychological implications weigh heavily on its importance as well. It's definitely a good cause across the board.

There is a bit of an arms race with deep learning technologies like TensorFlow. These are targeted technologies that aren't really meant to produce more generally intelligent machines. General intelligence is more of an evolutionary ringer that can survive just about anywhere. That sort of intelligence can generalize between databases by translating knowledge through similarities. It's something that would be capable of deciphering metaphor, symbolism, and mathematical abstraction. It would have internal modeling capabilities at least similar to those of humans. It would also have introspective awareness and be capable of epistemological consideration, which pretty much means it would be able to judge its ability to solve problems and account for its own shortcomings. It's intended to be a system that does meta-analysis by definition. So yeah, it's an extremely hard problem.
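
To make the "targeted" point concrete, here's roughly what the bread and butter of that arms race looks like: a model that learns exactly one task. This is just a minimal Keras sketch of a digit classifier, not anything from a particular project; it will never transfer what it learns to anything else.

```python
# A narrow, task-specific deep learning model (TensorFlow/Keras):
# it learns to classify 28x28 digit images and nothing else.
# Minimal sketch to contrast with the "general" capabilities above.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2)
print("test accuracy:", model.evaluate(x_test, y_test)[1])
```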

From time to time @wendell points out that certain AI technologies are not going to produce consciousness. This is probably something like what he's getting at. The difference between AI and AGI is general intelligence like humans have.


One Swedish perspective on the Swedish data leak scandal:

This is a consequence of incompetently outsourcing government data to private companies. It seems that the companies in question were never contractually bound by the responsible people (ultimately, politicians) to keep the data within the country, and no one checked on the handling of the data, or on those people, either. Perhaps a more competent approach is possible. The Swedish counterpart to the intelligence agencies is proposing state-owned security servers. This is of course being countered by the private companies in love with the tax money, and with a relatively low risk of failing to deliver: you failed to deliver? Well, we'll just give you a while longer so you can fix it. Which means more money and more contract.

Ordering the IT system, the specifications, and the delivery are ALL handled by consultants. The politicians have lacked competence in many different areas of politics for a while now, and this IT-related scandal has made that quite obvious. However, this leak was discovered and reported by the intelligence agency sector. Hopefully, we will get people into politics who can think appropriately to their task, and not appropriately to people who haven't voted for them yet. If they lack the competence to purchase what they have money for, then they have no power over the money they've been tasked with and need to be removed. This specific case has led to three very important ministers being replaced now (and one more when the leak was first discovered) out of a total of 22, and has shown the current government to lack the ability to communicate these types of issues, both specifically and generally (no doubt the same would be true of the alternatives). For whatever reason, the opposition chose not to drop the prime minister, which would have brought down the entire government.

And as serious as it is… this is likely not the end of it. Most of our (Swedish) authorities and agencies have been outsourcing their shit since the merry 90s, and there are probably more than 50 of them. Considering that this leak was known to the government for 7 months before going public, there are likely to be more, and that may well bring down the current government after it has been looked into by the Swedish supreme court counterpart. If that happens, there will be a gauntlet run of scapegoating the current government for all the sins committed. I would carefully hope for a change in the current relationship between the politicians and the private/consultant sector, because I dare not imagine a worse scenario. This scandal will almost certainly impact the election next year. It is even more of a scandal that the prime minister claims (possibly truthfully) he had no clue.

Mind you, in Sweden we probably wouldn't have freaked out as much had it not been for the Russian scare running hot, and the fact that the data appears to also contain military personnel information, such as who can fly a fighter plane. The ludicrous idea of over-optimizing the authorities' costs has been running for quite some time, and it is a common perception that we haven't seen the money from these "savings on tax spending" come back to benefit the public. It is primarily lobbied for by the IT sector, in love with the tax money being thrown at their feet by people who don't know shit (just like in any other country, really).

It is still unclear whether the data is safest with the intelligence sector or with the private sector. I'd kind of like to say "with the open source sector", but that also depends on whether open source users are really reviewing the open source code, or just reusing it, falsely believing that someone else has reviewed it.