I'll dig up some links for you tomorrow. But off the top of my head, you can take a look at these two books! Both are excellent summaries of contemporary research, by the authors and their peers. I also highly recommend Fodor's LRB review, and the reading list of David Poeppel (he is currently the Director of the Max Planck Institute, and perhaps the best-known neuroscientist working on computationalist theories of cognition).
Kluge: The Haphazard Construction of the Human Mind.
The Myth of Mirror Neurons.
I must say, though, that I agree with Minsky wholeheartedly! He was a rare genius! I remember, at MIT-150: The Golden Age: The Original Roots of A.I., Pinker asked Chomsky what he felt about statistical/big-data approaches to A.I., and Chomsky said something to the effect of, "They can capture patterns, and generalize over them very well. But intellectually, if you are looking for explanations for, say, why humans can write poems but gorillas cannot, then these methods are pointless." In the second half, during his own talk, Patrick Winston said, "Marvin never got to answer Pinker's question, only Noam did. But in short, he agrees with Noam," and the whole hall started laughing, because Minsky and Chomsky never agreed on anything!
But they did agree on this issue, and I think for good reasons. Achieving the kind of A.I. we see in self-driving cars, or in neural networks that can recognize cats, is fine. They have a purpose! But Minsky was right that these are not truly intelligent things. You cannot have a proper conversation with them, to begin with. They will not be able to make moral/ethical decisions, not without significant numbers of IF-THEN-ELSE clauses built in, and even then, if you ask them something that slightly bends the patterns they were trained on, they fail to creatively expand upon their experience. Human children, on the other hand, do far more complicated things with far less stimulus in the way of experience (what's known as the poverty-of-stimulus argument). That does not, of course, make A.I. of the present kind useless. Nobody would say that, and I don't think Minsky ever said that.
But consider this: unless we invest money in research that looks at the causal roots of human abilities, including issues related to consciousness, we will never know what makes humans distinct from the higher apes. We do not achieve our creative potential merely by mimicking (which orangutans can do very well; see below). We have a generative algorithm that takes limited materials as priors, and then generates infinite new and contrastive recombinations from them. So there is a scaling up, in this sense, of a magnitude that is unattainable by any non-biological system (yet)! The nature of this algorithm, how it came to be, and how something abstract (in that the algorithm and its structures are not rooted in any substance, kinda like numbers) is implemented in the embodied brain was Minsky's, and Chomsky's, main interest (though they disagreed on everything except that this is the most important question)! For Minsky, understanding this was the key to creating machines that would be indistinguishable from biological organisms. For Chomsky, understanding it is the key to explaining human nature. There's very little overlap between them, but what little there is concerns the key issue!
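To make the "limited materials, infinite recombinations" point concrete, here's a toy sketch of my own (not anyone's actual theory of the mind's algorithm): a handful of recursive rewrite rules over a tiny lexicon. Because one rule mentions itself (NP can contain another NP), the set of distinct sentences it can derive grows without bound as you allow deeper derivations, even though the rules and words are finite.

```python
import itertools

# Finite "materials": five rewrite rules and a five-word lexicon.
# The recursion in NP -> NP PP is what makes the output set unbounded.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"], ["NP", "PP"]],
    "PP": [["near", "NP"]],
    "VP": [["sleeps"], ["sees", "NP"]],
    "N":  [["cat"], ["dog"]],
}

def generate(symbol, depth):
    """Yield every word sequence derivable from `symbol` within `depth` expansions."""
    if symbol not in GRAMMAR:          # a terminal word: yield it as-is
        yield [symbol]
        return
    if depth == 0:                     # out of budget: abandon this branch
        return
    for production in GRAMMAR[symbol]:
        # expand each child symbol, then combine the alternatives
        child_expansions = [list(generate(child, depth - 1)) for child in production]
        for combo in itertools.product(*child_expansions):
            yield [word for part in combo for word in part]

shallow = {" ".join(words) for words in generate("S", 4)}
deeper = {" ".join(words) for words in generate("S", 6)}
print(len(shallow), len(deeper))  # the deeper bound admits strictly more sentences
```

Nothing about this toy says how the brain does it, of course; it just shows that "finite priors, unbounded output" is not a paradox but an ordinary property of recursive systems.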
Beyond the fancy, and often fictional, ideas of things like HAL, this also has more immediate consequences. For instance, children born with specific language impairments, or with rhythm impairments, or people with aphasia, etc., can be treated or cured only if you know exactly what computational mechanisms the brain uses, and how, to achieve these cognitive abilities. An adult who loses the ability to process natural-language syntax due to injury to the left hemisphere can learn to transfer some of that work to the right hemisphere. Children born with an innate language impairment, however, cannot. Why? Is the second case a software defect, as opposed to a hardware defect (the adult case)? If the hardware is damaged, you can transfer the software to another substrate; but if it's the software that's defective, adding more hardware is not going to solve the problem. Things like these require causal explanations, and all Minsky was saying is that you cannot find those explanations by just throwing data at the problem.
I think everyone agrees that the brain is a computer of some kind. There was some resistance to the analogy in the '60s because the brain processes many things in parallel, which early computers couldn't. But GPUs are good examples of processors that compute in parallel rather than being limited to serial operation. I think everyone also agrees that neural computation is substrate-agnostic: the computational properties are not rooted in the material components of the brain (there's nothing unique about what we are made of) but in how they are put together. So you could also, possibly, emulate the mind on some other substrate.
What remains, then, is to decode what the lines of code are. This is where it gets very difficult. The comparison with other devices is not very helpful, because we know how those devices work! We made them; we wrote their code! The mind, on the other hand, is the work of evolution. We did not design it, and trying to understand its software is like trying to understand an alien programming language without any guide to its basics. Reverse engineering is just not an option here. And Minsky, who understood this, wanted people to acknowledge that looking at some of what the brain does, and trying to approximate it without worrying about how the brain does it, leads to false gratification. You may be able to perform menial tasks, but the larger issues will keep evading you, and in the longer run, the kind of A.I. you can create will also be severely limited. He merely wanted people to acknowledge that there is a distinction, and that while creating something that can actually pass the Turing Test (without cheating) can be frustratingly difficult, ignoring the problems won't make them disappear!

His aversion to connectionism was also rooted in his interest in decoding the software side of the brain. He thought, rightly, that understanding that aspect of things would help us make truly intelligent machines. Like ourselves. But you can no more explain how the brain does the various things we do by appealing to the large number of neurons than you can explain how an OS does something by saying the computer has a powerful processor! That, according to Minsky, is not an explanation at all. No one really denies the basics of connectionism... the incredibly large number of neurons does help, just as a very powerful processor helps. But there's still the OS and its kernel, and the processor's power alone does not explain their architecture.