L1News: 2017-01-24 If you burn a thought, is it a cog-ignition? | Level One Techs

What is the favorite neural transmitter of copy machines?
sero-toner

One-Tab Link: http://www.one-tab.com/page/ks1a_13aRGmoI45Ujtk8OA


00:00 Intro

00:20 Store Plug
store.level1techs.com

00:41 Giuliani Hacked
https://www.scmagazine.com/giuliani-and-top-trump-white-house-officials-hacked-passwords-leaked/article/632676/

02:36 London University Email Monitoring
http://www.independent.co.uk/student/news/kings-college-london-prevent-anti-terror-london-university-islamaphobia-monitoring-student-emails-a7538931.html

04:08 OnStar and SiriusXM Tracking
https://www.techdirt.com/articles/20170116/09333936490/law-enforcement-has-been-using-onstar-siriusxm-to-eavesdrop-track-car-locations-more-than-15-years.shtml

06:18 Symantec issues illegit HTTPS certificates
http://arstechnica.com/security/2017/01/already-on-probation-symantec-issues-more-illegit-https-certificates/

08:40 Last Mile Internet costs
http://arstechnica.com/information-technology/2017/01/when-home-internet-service-costs-5000-or-even-15000/

13:27 Raspberry PI alternative from ASUS?
hackaday.com/2017/01/21/a-motherboard-manufacturers-take-on-a-raspberry-pi-competitor/

14:46 Tesla Autopilot Good
https://techcrunch.com/2017/01/19/nhtsas-full-final-investigation-into-teslas-autopilot-shows-40-crash-rate-reduction/
https://www.slashgear.com/tesla-autopilot-update-now-rolling-out-with-new-semi-autonomous-features-22472604/

19:17 Areas of AI
https://medium.com/@NathanBenaich/6-areas-of-artificial-intelligence-to-watch-closely-673d590aa8aa#.2athtn5c2

20:07 AI and Neuroscience
http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1005268

25:27 Make AI See the World as Humans Do
http://www.mccormick.northwestern.edu/news/articles/2017/01/making-ai-systems-see-the-world-as-humans-do.html

26:16 Reddit AMA on AI
https://www.reddit.com/r/science/comments/5nqdo7/science_ama_series_im_joanna_bryson_a_professor/

26:51 Apple Suing Qualcomm
http://www.cnbc.com/2017/01/20/apple-sues-qualcomm-for-1-billion.html

30:18 Nintendo Message for Hackers
https://waypoint.vice.com/en_us/article/of-course-nintendo-left-a-cute-message-for-hackers-inside-the-nes-classic?utm_source=vicefbusads

32:15 Soft Exosuits
https://wyss.harvard.edu/soft-exosuit-economies-understanding-the-costs-of-lightening-the-load/

34:20 Solar Power Prices
https://electrek.co/2016/09/26/solar-power-cost-down-25-in-five-months-theres-no-reason-why-the-cost-of-solar-will-ever-increase-again/

35:43 Silicon Energy Storage
http://newatlas.com/cheap-solar-energy-molten-silicon/45833/

37:10 Lab Grown Meat
http://www.foodsafetynews.com/2017/01/clean-safe-humane-producers-say-lab-meat-is-a-triple-win/#.WIUS7xsrKUn

39:38 Star Trek Fan Film Settlement
http://www.hollywoodreporter.com/thr-esq/cbs-paramount-settle-lawsuit-star-trek-fan-film-966433

42:53 Outro



This is a companion discussion topic for the original entry at https://level1techs.com/video/l1news-2017-01-24-if-you-burn-thought-it-cog-ignition

I gotta say, Wendell, I can't wait for you to get your hands on that ASUS Tinker Board either. I hope good things come from it, but who knows. Hopefully it ends up with the huge community support that the Pi has.

Yeah, the Tinker Board is something that really interested me too. I really like the idea of these mini DIY computers.

Off topic from that:

Russians are already ahead of the curve on the dashboard-camera thing. Most of the best car crash compilations come from them.

As for OnStar, the driver data they have collected will be very valuable to the future of self-driving cars. It could be worth a lot to them.

I'm not surprised that OnStar has been doing this.

Also, on the Nintendo thing:

There have been some tests showing that the NES Mini gets really low input lag when plugged into a TV through HDMI, compared to a Raspberry Pi with the same setup. This could be quite interesting to some people.

Input lag has always been a problem with these older 8-bit and 16-bit games. Some emulators alleviate these issues, but if you try to play an original NES on a modern display using the yellow-and-white RCA jacks, the input lag will put a lot of players off, making some games even harder or outright unplayable.

Devices like the Framemeister can solve these issues for older consoles, but they are also expensive. The NES Mini is cheaper than a Framemeister and (apparently) gives a closer experience to playing these games on a CRT than the Raspberry Pi does.

Nintendo might be doing some interesting things within the software to reduce input lag.

But the NES Mini still has storage limitations, so it can only hold about 90 games (not the whole library). The storage on the system is used for individual saves, instruction manual art/box art, and the ROM data. It is limited until someone finds a way to add a microSD card reader.

Also, I think the Disney comparisons are apt. They really are trying to "Disney Vault" their old library to make it feel more prestigious to their consumer base. The Switch will have an online paid service that gives out two free retro games a month. But then they will take those two games away once the month is over and give you two 'new' retro titles in their place.

When you get right down to it, Nintendo just wants to be Jobs era Apple and Disney combined.

About the London university reading emails.

Clearly, this only applies to the email addresses that are hosted by the university.

Why would anyone use those email addresses?

You might have to use them when talking to the university, but for personal stuff, use something else instead.

Thanks for the one tab!

In regards to smart vehicles (based on my experience in the automotive industry), I believe there will be a huge market for either older non-smart vehicles or some sort of blocking technology. That will then be made illegal, and it will become a mandatory part of vehicle registration that your vehicle be fitted with remote monitoring if it isn't already. We will then see your vehicle talk to computers placed at traffic lights that query your smart vehicle's logs looking for things like speeding or dangerous driving, and then electronically issue you a ticket.

The data on where and how you drive would then be sold to finance companies, who would use it to determine what level of driver you are, what insurance you can get, and how much you pay.

Advertising agencies would then use the data to see which stores you drive by most and target ads based on that. For example, let's say McDonald's reads the data and notices you drive past three of its stores twice per day; it could then send your smart vehicle an electronic discount linked to your car's UID/MAC/IP, automatically redeemed when you go through any of those three drive-throughs.

You may think I sound crazy, but ask yourselves: "Are we really that far away from this reality?"


For most formal conversations -- setting up collaborations, applying for grants, getting information from other laboratories, peer reviewing, applications, references, conference registrations, etc. -- the academic extension at the end of the email address (.edu, .ac.nz, .edu.au, .ac.uk, etc.) is a sort of certificate of authenticity without which professional bodies will not respond to your queries. So it is not as simple as just using the university email for talking to the university, especially if you have moved beyond the undergraduate phase, or if you are tenured.

So basically, stuff that's related to the university, meaning stuff that the university finds out about anyway, even if they weren't spying.

But why would you use it for personal things, like registering for random websites or talking to your friends?

Also, do you get to keep the email address after you finish university?

"Nintendo, y u do dis?"
Nintendo:

HTTPS certs:
I just bought a Thawte cert (through a third party, who bought it from Symantec) for one of our services hosted on a vendor's site. I was never contacted by phone or email to verify that we are who we say we are. Perhaps my vendor took care of that, but I don't know.


Last Mile Internet Costs:
When our street finally got cable TV, it was incumbent on us (the homeowners) to run our own conduit, per their specifications, from our home's point of entry to the point in the easement where they would tie us in. We had to do that at our own cost. And for us, our street is somewhat rural, with homes on multiple acres. However, you only had to install the conduit; the cable company provided the wiring (RG6).

The cable company would offer to do it for you, but at a price of thousands of dollars, due to the length of the run and the number of hours it would take.


Raspberry Pi alternative:
I welcome these Raspberry Pi alternatives. I think that these are ultimately going to be the cheap systems that push Linux adoption. However, the GPUs need to be fully supported. This is still a huge problem, even for the Raspberry Pi.


Solar Power:
In places like Nevada, solar panels are effectively banned because the state has a stake in power generation. Maybe when Nevada runs out of water to run hydroelectric dams they'll change their minds. Probably not.

Not always directly related to the university! Let me give you an example: I mainly work with computational neurobiology of language. Suppose I wanted to collaborate with someone who works on, say, machine learning, for a paper I want to publish. Papers are published, and the research done, by researchers on their own, using the university's resources, but free from interference (unless you do something that will damage the university's reputation). So the university wouldn't necessarily know, nor care, if I am collaborating with someone from the other side of the globe. BUT, suppose my collaborator fakes his data, and the publication gets in hot water... then my first line of defense when the university asks me "Why did you involve a shady character?" would be, "I didn't know! He was employed by Harvard, he had a harvard.edu email... how would I know someone like that would turn out to be a douchebag?" It's not so much a compulsion as it is a legal waiver (sort of) that people use to cover their own asses. Kinda like how they make you scroll all the way down to the end of the EULA before the "I Agree" button becomes available, even though 90% of us never read the damn thing. But since you have scrolled to the end of it, you have freed the provider from liability!

Sometimes... it lets you attend and register for peer-review and developmental events like ResBaz for free. You can take any and all of their offerings or courses for free too, which you would otherwise be paying through your nostrils for. Also, for instance, and again a personal example, I subscribe to the Nature Neuroscience, Theoretical Biology, Linguistic Inquiry and Biolinguistics journals! The yearly subscription rates are astronomical for most of them (all put together, close to a few thousand per six months), unless you get a higher-education discount. That requires you to use an official academic email address! The same goes for my Matlab, Simulink and R Studio purchases! Matlab reduces the price by almost 90%. I last renewed in January 2016, for two years, and the differences were significant: normal rates were Matlab = NZD 4120 and additional toolboxes = NZD 85 each, but with the discount it was Matlab = NZD 59 and additional toolboxes = NZD 12 each.

Depends on a number of factors! First, if your email address has an extension with the word 'student' in it, which is what they do for undergraduates (@student.uni/lab name.ac.country or @student.uni/lab name.edu.country), then you will probably lose access to it a few months after graduation. I am not very sure about the US (because I did not study in the US after high school), but in Europe, Australia and NZ, PhD scholars are treated on par with faculty and get a normal faculty email address (@uni/lab name.ac.country or @uni/lab name.edu.country). For the latter, if you finish your thesis and it is accepted for publication post peer review, the university will award you the degree and let you keep the email as long as you want. It's a reward for the credit they get from your research, and it lets you access reduced prices on academic and research-related services.

This is not a very representative article of cognitive neuroscience at all! First, I don't recognize the authors by name. Which, in and of itself, is fine... I don't have to know everyone and everything. But the very first few lines of the abstract are giveaways that they are trying to pass off their personal adherence to (or repulsion from) a very narrow theory of eliminative connectionism (or something related to that view of things) as the orthodox theory in the field!

"There is a popular belief in neuroscience that we are primarily data limited, and that producing large, multimodal, and complex datasets will, with the help of advanced data analysis algorithms, lead to fundamental insights into the way the brain processes information."

This is not only untrue, but it is fundamentally misrepresenting the field. Unfortunately, cognitive (neuro)science and evolutionary genetics are two areas where people would slit the throats of representatives from the other side if they could get away with it. So I am not surprised.

But, to get back to the point, the idea that the brain is fundamentally an information-processing unit is either a strawman (in that nobody would deny it) or fallacious (in that you are not really saying anything about what it does, like how saying 'sugar is made of sweet' doesn't tell you anything about the structure of sugar). During the late '90s and since, the falling prices of neuroimaging techniques, and the inherent difficulties in actually coming up with a causal explanation of things, drove a lot of neuroscientists (who were completely ignorant of the mathematical foundations of modal logic, propositional attitudes, language, etc.) to churn out brain scans, with colorful images of cortical regions, and subsequently claim they had found something interesting! There's nothing new in that... everybody understands that consciousness is not going to fall out of localization studies of cortical regions. In fact, the father of the Computational Theory of Mind, Jerry Fodor, has an entertaining and very amusingly readable article in the London Review of Books on the futility of the methods adopted by certain neuroscientists! I highly recommend it to lay readers of neuroscience for a better understanding of why, and how, meaningless brain scans tell us almost nothing about the brain. And I say this as someone whose professional career revolves around the proper use of neuroimaging and neural responses...

The same feeling was expressed by David Poeppel and Gregory Hickok, two of the premier cognitive neuroscientists in the world, about the SfN 2016 conference, almost twenty years after Fodor's lament! All haphazard data, not much insight. But there are exceptions, and Poeppel himself lists some cognitive neuroscientists whose work produces meaningful computational understandings of the mind! Here... Talking Brains!

Long story short... I fail to see the point of the paper linked in the news! Everything they have replicated, everyone already knew! If you know that, say, cutting off your index finger without anesthesia hurts, do you need to repeat the process with a different finger, or maybe a toe, to confirm the finding? Theories of perception and standard biology should tell you "No"! These are meaningless wastes of public money, and all research is funded by public money (unless it can be weaponised, which is when Lockheed Martin would bring out their wallet). And when you waste money on these meaningless replications, you are certainly taking it away from other meaningful areas that maybe do not involve neuroimaging!

Recognition of this is growing, which is why I felt that the characterization of the field in the abstract was misleading. In fact, if you look at the recent big-money grants from the NSF, they all go to programs that literally look Beyond Big Data!

The neuroscientist Gary Marcus also has an amusing article on the problems of Big Data approaches to science, which has a very humorous graph at the beginning, and it hits the nail right on the head! And throwing data at modelling algorithms has only done harm to chemistry. So I don't think anyone really believes that we are limited by data! It seems to be an assumption the authors have, which they are imposing on the field.


On Raspberry Pi alternatives, I would look at either the Pine A64 or the Banana Pi with an Allwinner A64; both are cheaper and more powerful than the Pi 3.

And they have proper gigabit Ethernet.

How exactly do you use university resources without the university knowing about it?

Well... if I'm running a particular set of experiments, then other than anyone who also wants to use that same lab and has to schedule their time around mine, nobody really keeps tabs on what I am doing. They might ask me "why" or "what for" on an acquisition form if I ask for some resources (consumable or instrumental) to be bought for a study, but otherwise nobody really follows you around once the university has given you access to something.

Can you suggest some additional reading material for me? :)
I've only got articles like this and The Adapted Mind under my belt, which is not really neuroscience at all.

On the computer-science side of it, I've done a lot of reading on AI from the Minsky era, and I can't help but notice certain parallels between "the gospel" of how one approaches artificial intelligence as written by Minsky and the practical approach employed by Google and others. When I was first learning about it, I was having a hard time reconciling what I'd read in Minsky with this new stuff that works so well.

In my view, the problem for computer scientists with the earlier approaches was one of scale. They work in limited domains, but the generic small-scale approach does not scale up or out into something more useful. It's sometimes called the frame problem? In CS, usually that's useful, but in the case of AI it totally wasn't useful at all.

Not knowing enough about neuroscience, it's tempting to look at a bunch of simple machines -- neurons -- and how they're interconnected in a very similar way. It "seems like" it scales up just fine from a small scale. Sure, the number of interconnections grows rapidly, but honestly that's just not much of a problem when we're talking about multi-GPU servers, racks of servers, high-speed interconnects, and massively parallel operations.
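To put rough numbers on that scaling intuition, here is a toy sketch (plain Python, no ML library; the function name and layer widths are made up for illustration). For fully connected layers, every neuron in one layer feeds every neuron in the next, so the connection count grows quadratically with layer width rather than truly exponentially -- which is exactly the kind of regular, parallel arithmetic GPUs are good at:

```python
# Toy illustration: connection counts in fully connected "neural" layers.
# A pair of adjacent layers of widths a and b contributes a*b weights.

def dense_connections(layer_widths):
    """Total weight count for a stack of fully connected layers."""
    total = 0
    for a, b in zip(layer_widths, layer_widths[1:]):
        total += a * b  # every neuron in layer a connects to every neuron in layer b
    return total

# Doubling the width of every layer quadruples the weight count.
small = dense_connections([100, 100, 100])   # 2 * (100 * 100)
big = dense_connections([200, 200, 200])     # 2 * (200 * 200)
print(small, big, big / small)  # 20000 80000 4.0
```

Quadratic growth is heavy but tractable; it parallelizes as big matrix multiplications, which is why racks of GPU servers handle it so well.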

So big data has lead to at least a few good things: https://research.googleblog.com/2015/08/the-neural-networks-behind-google-voice.html

This neural network is downloadable now, and is a component in many AIs, including Lucida. It's as good as it is because the dataset used to generate it was insanely huge. The neural net itself is only a few hundred megabytes, though, and it's really good, downloadable, works offline, etc. It's still kind of cheating, because the training was highly supervised. In fact, the little blog post from Google talks about how they tweaked their approach to feeding the RNN with "big data" so the AI would learn regional/accent nuances, rather than learning from one big vat of pronunciations.

And so we have these glorious models made with clever hackery and supervised machine learning -- there still isn't a generic, scalable approach.
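For anyone who hasn't seen what "supervised" means concretely, here is a minimal sketch (plain Python with a hypothetical toy dataset; this is not Google's pipeline): a single logistic unit nudged by gradient descent toward human-provided labels. The point is only that every update step depends on a label someone supplied, which is the sense in which it's "cheating":

```python
import math

# Minimal supervised learning sketch: one logistic unit trained with
# gradient descent on a tiny labelled dataset (logical AND).
# Every update below uses a human-supplied label y -- that dependence
# is what "supervised" means.

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w = [0.0, 0.0]  # weights
b = 0.0         # bias
lr = 0.5        # learning rate

def predict(x):
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

for _ in range(2000):          # many passes over the tiny dataset
    for x, y in data:
        p = predict(x)
        err = p - y            # gradient of log-loss w.r.t. z
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# After training, the unit reproduces AND on its training inputs.
print([round(predict(x)) for x, _ in data])  # [0, 0, 0, 1]
```

Unsupervised or self-supervised methods try to remove that per-example label requirement; the generic, scalable version of that is the part that was still missing.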

As for the article above, it may just be a bunch of undergrads. I'm not sure?


Re AI: Whether or not consciousness is the result of AI research and development probably has no bearing on its ability to function as well as or better than a human. Unfortunately, the notion of the homunculus is the worldview of the vast majority. The notion that we are in the driver's seat of the biology has been shown to be false not only in behavioral science but also in cognitive science. There really is no functional value in what we call consciousness. We have no meaningful description of the phenomenon. The notion of consciousness itself appears to be a red herring. Such a high-level metric would not only need to be demonstrated to exist in a meaningful way, but it would also need to be congruent with low-level processes. That would require some unifying interface between the two.

A lot of the study in AI research revolves around pattern recognition and logic, lent by cognitive science. One of the leading AI researchers has a free book that serves as an abstract of sorts for his work in the field.

http://www.goertzel.org/HiddenPattern_march_4_06.pdf

Re in vitro meat: We already have serious economic issues with cattle production. The bulk of rainforest depletion is due to cattle grazing. It's even encroaching on protected lands at an alarming rate. It used to be soy farming for the purpose of feeding cattle that was the cause of deforestation. The correlation is pretty clear. Many studies have shown cattle to be an inefficient source of protein. Like it or not, cattle farming is contributing to climate issues by being destructive to the natural systems that mitigate atmospheric carbon. It's not just fossil fuels. The financial issues with in vitro meat are just that... financial. They have nothing to do with economics.

Many studies have shown that humans in general eat more meat than their dietary requirements call for. This adds to issues with healthcare that affect other aspects of the social system as well. This isn't accounted for in the political discussions on the topic, which tend to center on the rights of the individuals, or of the cows, chickens, and pigs... Meat producers, like all other corporations, are expected to grow, even if the demand for their goods isn't present. "Thanks, stockholders." This is another case of financial imperatives creating economic issues.

In vitro meat could be much better tasting than naturally grown meat, because the environment is more controlled. Incidents of disease could be mitigated, the control over the marbling might be better, the control over the connective tissues might be better... This might produce a more tender and tasty steak in the coming years.

It's much easier to stack labs than it is to stack fields, too. We are eventually going to have to deal with the issue of deforestation, even if "big beef" is bigger than Oprah ever was.

Re free markets: Adam Smith, as brilliant as he was, didn't have the benefit of the understanding of natural systems that we have today. He died decades before Darwin was even born. The hierarchy of systems is essentially a froth of feedback loops that can actually produce predictive value for those with the proper axioms. The understanding that the behaviors of subsystems are a response to the behaviors of systems and other subsystems is a very good way to consider the feasibility of a free market, as a free market would be affected by those initial conditions. It's much easier to speak on free-market economics under the current conditions, as that is what is being attended to. Under the current conditions, liberation of markets produces more powerful and less economic entities. The problem with free-market economics as it is, is that there is a much deeper and more significant issue: the general state of the system itself. The model is not congruent with human predispositions to behavior. The system in and of itself is an incoherent artifice. A free-market system couldn't be expected to function in the desired manner under the current initial conditions.

I was being nice in the first paragraph. Now it's time to be realistic.

Under any conditions, free markets are vulnerable to the issues of the individuals participating in them. There will always be normative attractors in any type of biological system, because of the environmental pressure that resulted in the predisposition to self-preservation that evolution endowed us with. There is also the human motivational system to contend with. The anticipation of reward is powerful -- more powerful than actually getting a reward. That's probably because it exists for the purpose of motivation under difficult circumstances, not so much for the acquisition itself. In essence, we are not motivated to do what works; we are motivated to survive difficult times. Behavior is the result of perceptions of environmental stimuli, not necessarily the actual initial conditions themselves. People can behave appropriately for a specific perception without that perception being the most accurate approximation of reality. This results in inappropriate behavior, and it's as easy as believing a lie. The predisposition toward normative functions still exists, though. When the issues become so prominent that the evidence for them becomes undeniable, behaviors in general become more normative. This allows us to be resilient against adversity and incoherent perceptions as well.

The notion of free markets was popularized by the old-school libertarians. They believed that no entity had the right to impose restrictions on private endeavors. This is not likely to be carried out in social structuring, as humans are predisposed to investing in the strengths of others. This tends to be expressed in the creation of leadership roles, not only in humans but in all biology. Free markets would be vulnerable to people suffering from the insecurity of psychopathy or any other serious issue. The notion that any biological system could lack a socially normalizing entity appears to be nothing but fantasy. Humans are part of humanity, and nothing less than changing humans into something that could no longer be called human could change that. Markets aren't economically valuable if they don't produce a more general value.

Too much liberty in the marketplace will almost certainly separate markets from their natural function. Economics is naturally messy. Entropy and extinction will be the most prevalent occurrences in any natural system. Normative function is a must for survival alone, not to mention thriving in an environment. Where political nonsense fails, natural predispositions clean up the mess... through normative function. History is full of such instances.

I'll dig up some links for you tomorrow. But off the top of my head, you can take a look at these two books! Both are excellent summaries of contemporary research, both by the authors and by their peers. Also, I highly recommend Fodor's LRB review, and the reading list of David Poeppel (he is currently the Director of the Max Planck Institute, and perhaps the best-known neuroscientist working on computationalist theories of cognition).

Kluge: The Haphazard Construction of the Human Mind.
The Myth of Mirror Neurons.

I must say, though, I agree with Minsky wholeheartedly! He was a rare genius! I remember, at MIT-150: The Golden Age: The Original Roots of A.I., Pinker asked Chomsky what he felt about statistical/big-data approaches to A.I., and Chomsky said something to the effect of, "They can capture patterns, and generalize over them very well. But intellectually, if you are looking for explanations for, say, why humans can write poems but gorillas cannot, then these methods are pointless." In the second half, Patrick Winston was giving his talk, and he said, "Marvin never got to answer Pinker's question, only Noam did. But in short, he agrees with Noam," and the whole hall started laughing, because Minsky and Chomsky never agreed on anything!

But they did agree on this issue. And I think for good reasons. Achieving the kind of A.I. we see in self-driving cars, or neural networks that can recognize cats, is fine. They have a purpose! But Minsky was right that these are not truly intelligent things! You cannot have a proper conversation with them, to begin with. They will not be able to make moral/ethical decisions, not without significant numbers of IF-THEN, OR-ELSE clauses built in, and even then, if you ask them something that slightly bends the patterns they have been trained on, they fail to creatively expand upon their experiences. Human children, on the other hand, do far more complicated things with far less stimulus by way of experience (what's known as the poverty-of-stimulus argument). That does not, of course, make A.I. of the present kind useless. Nobody would say that, and I don't think Minsky ever said that.

But, consider this. Unless we invest money in research that looks at the causal roots of human abilities, including issues related to consciousness, we will never know what separates humans from the higher apes. We do not achieve our creative potential merely by mimicking (which orangutans can do very well; see below). We have a generative algorithm that takes limited materials as priors, and then generates infinite new and contrastive recombinations from them. So there is a scaling up, in this sense, of a magnitude that is unattainable by any non-biological system (yet)! The nature of this algorithm, how it came to be, and how something abstract (in that the algorithm and its structures are not rooted in any substance, kinda like numbers) is implemented in the embodied brain was Minsky's, and Chomsky's, main interest (though they disagreed on everything except that this is the most important question)! For Minsky, understanding this was the key to creating machines that would be indistinguishable from biological organisms! For Chomsky, understanding it is key to explaining human nature. There's very little overlap, but the little overlap there is concerns the key issue!

Beyond the fancy, and often fictional, ideas of things like HAL, this also has more immediate consequences. For instance, children born with specific language impairment, or with rhythm impairment, or aphasia, etc., can be treated or cured if you know exactly what computational mechanisms the brain uses, and how, to achieve these cognitive abilities. For instance, an adult who loses the ability to process natural-language syntax due to injury to the left hemisphere can learn to transfer some of the responsibilities to the right hemisphere. Children born with an innate language impairment, however, cannot. Why? Is the second case an issue of a software defect, as opposed to a hardware defect (the adult case)? If the hardware is damaged, you can transfer the software to another substrate. But if it's the software that's defective, adding more hardware is not going to solve the problem. Things like these require causal explanations, and all Minsky was saying is that you cannot find those explanations by just throwing data at the problem.

I think everyone agrees that the brain is a computer of some kind. There was some resistance to the analogy in the '60s because the brain processes multiple things in parallel, which early computers couldn't. But GPUs are good examples of processors that do parallel work and are not limited to serial operation. I think everyone also agrees that neural computation is substrate-agnostic: the computational properties are not rooted in the material components of the brain (there's nothing unique about what we are made of) but in how they are put together. So you could also, possibly, emulate the mind on some other substrate.

What remains, then, is to decode what the lines of code are. This is where it gets very difficult. The comparison with other devices is not very helpful, because we know how those devices work! We made them; we wrote their code! The mind, on the other hand, is the work of evolution. We did not design it, and trying to understand its software is kind of like trying to understand an alien programming language without any guide to its basics. Reverse engineering is just not an option here. And Minsky, who understood this, wanted people to acknowledge that looking at some of what the brain does, and trying to approximate it without worrying about how the brain does it, would lead to false gratifications! You may be able to perform menial tasks, but the larger issues will keep evading you. And in the longer run, the kind of A.I. you can create will also be severely limited. He merely wanted people to acknowledge that there is a distinction, and that while creating something that can actually pass the Turing Test (without cheating) can be frustratingly difficult, ignoring the problems won't make them disappear! His aversion to connectionism was also rooted in his interest in decoding the software side of the brain. He thought, rightly, that understanding that aspect of things would help us make truly intelligent machines. Like ourselves. But you can no more explain how the brain does the various things we can do by appealing to the large number of neurons than you can explain how any OS does something by saying the computer has a powerful processor! That, according to Minsky, is not an explanation at all! No one really denies the basics of connectionism... the incredibly large number of neurons does help! Like a very powerful processor helps. But... there's still the OS and its kernel, and the processor and its power alone do not explain their architecture.


I do not think consciousness is the hard problem, really! The hard problem is the mechanism of preconsciousness. For instance, when you do something like picking up a glass, the associated cortical regions become activated a few milliseconds before the time required for the signal to travel from the brain to the hand, and the vast majority of these activations never reach conscious stages. It is preconsciousness, and what triggers it, that is supremely important in understanding why organisms behave the way they do. Which in turn will have implications for genetics.

I cannot understand how this paragraph sticks together. There is a LOT of functional value to what is referred to as consciousness, and even people like Lewontin and Dennett (two extremes of the scale) would agree. Similarly, the existence of consciousness, or whatever is meant by it, has over forty years of evidence behind it. Perhaps what you mean is that there is no restrictive definition of what constitutes consciousness?

While I agree, mostly, with your views on free markets, I would also add that the view of Adam Smith parroted by libertarians (which is usually a euphemism for selfishness) is a false one. On closer reading, Smith appears to be a very radical intellectual, not unlike other figures of the Scottish Enlightenment! For one, people are always attributing the Division of Labor to Smith, which he actually abhorred as a repugnant system.

Also, it is worth mentioning that what passes as a Free Market these days is actually state-controlled capitalism, with significant statist intervention to ensure that wealth remains concentrated. The United Fruit Company, and the many coups orchestrated by the CIA in South America against democratically elected governments, are glaring examples of that. It's not a free market; it's a market that's free for specific entities to use as they wish.

There really is no accepted definition of consciousness. There's no reason to think that introspective awareness couldn't be achieved with information processing. If consciousness is more than the experience of introspective awareness and "is your red the same as mine?", no one is providing a good argument for it, much less evidence. People are upwards of 90% impulsive, and what even the experts consider consciousness occurs after the fact, as some sort of acknowledgement. There's really no reason to think that evidence for consciousness beyond behavior could exist. Where would functionality exist beyond behavior?

Yes, the "people are naturally greedy" meme is probably just a justification for bad behavior. Adam Smith understood that self-interest needed to scale: being part of an economy or ecology that fails miserably is not in anyone's interest. That is the premise of the "Invisible Hand". Physics led Albert Einstein and David Bohm in that direction as well.

The US was intended to be a Democratic Republic with two parties: one that represents democracy and one that represents the republic. It's kind of strange, because the only democratic aspect was the ability to vote for representatives, or even to be one, if you can get the votes. Many now argue that it's a Representative Republic. Lately, however, the evidence has suggested that it's an oligarchy. I'm sure you are aware of the Princeton paper, which was paywalled at Cambridge for years but is now open access.

https://www.cambridge.org/core/services/aop-cambridge-core/content/view/62327F513959D0A304D4893B382B992B/S1537592714001595a.pdf/div-class-title-testing-theories-of-american-politics-elites-interest-groups-and-average-citizens-div.pdf

My point, however, is that a free market system is beyond what humans are capable of behaviorally.
