Let's talk open-source solutions: privacy, trust, data, computation, AI, platforms, distribution, decentralization, rights, etc.

This is something that concerns me as well. There are so many distributed influences that they produce a wide variety of interests and aptitudes in the population. This is the principle that brought about the “division of labor”. There are so many ways to contribute to a complex society that it seems there will always be a gap, not only in interest and aptitude, but also in communication. This doesn’t mean, however, that the gap cannot be bridged through mediators.

Level1 is a pretty good example of mediation. They share their technical expertise in more common terms. It’s still kind of a niche market, but it could be implemented in other ways. They also do something similar professionally: they are all professional consultants who bridge the gap between technology and business. This can be done not only by employing humans for the task, but also with things like networks and software that translate terms.

I’ve been working from a framework of cross-disciplinary, inferential, statistical analysis. The main organizing principles translate well across scientific disciplines. By analogy, one can often translate technical analysis into common terms, or even into the organizing terms of very different endeavors.

There is a principle of strong AI or AGI that may be helpful as well. It’s the principle of general intelligence. It suggests that the fundamentals of organization are similar in all organized structures. The principle is used to describe the versatility of human intelligence in opposition to current narrowly capable AI. It suggests that highly intelligent systems can generalize between tasks that are very different. This is something that could not only help us to create more intelligent systems, but also aid our thought processes to make ourselves more intelligent.

I’ve been advocating Tim Berners-Lee’s suggestions for a new data-sharing-based global network, and suggesting mediation to complement it. I’ve also been advocating Geeks Without Bounds’ work to create cooperative (community-owned) internet access for the funding and infrastructure. If you have anything to add, or criticisms, I’d be very interested.


The will to start a change also does not imply an agreement over where, when, and how you intend to end the change - the satisfactory outcome is seldom agreed on prior to starting a change. Of course, agreement between the change drivers may even be impossible, so lack of agreement about the outcome may actually be necessary to start a change. If this holds, then any political change indeed must be towards the unknown in order to happen at all.

There is seldom an actual implied or explicit agreement about an end goal. Indeed, in current political discourse there is no explication at all. Without explication, no one gets any facts to consider. Thus we are denied the basis of any moral and ethical behavior - making decisions based on being able to live with the actual consequences. No explication - no consequence. For this reason, the current political discourse lends itself unusually well to the prerequisite for change. Lack of facts and transparent calculation makes the step towards the unknown easier. Like taking a whiskey before jumping off a cliff (not implying the jump will necessarily be lethal, or even hurt, just that it is indeed a jump into a counterintuitive unknown).

So the drive towards “fuck this, let’s burn it down” may come not out of need, but out of a lack of material to base judgement on.

I agree and disagree. I think we already have technology that can close the gap for a lot of things that are too complex because of the limitations of the human brain. It’s more like intermittent implementation than the final product people generally picture as AGI… I am researching machine memory and awareness, and I think we can enhance our overall awareness; eventually a different kind of awareness will emerge out of such tools. The general-intelligence part is that our brain could use help automating a lot of tasks which, frankly, the human brain is not even good at. The major limitation is the communication between machine awareness and the human mind. But again, you could build algorithms to generalise large data into visualisations and simulations that our brain can easily understand (see the sketch below). You could also develop an information-dense and meaning/sentiment-dense hyper-visual language. There are tons of ways we don’t even know about yet.
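To make the visualisation point concrete, here is a minimal sketch of that kind of “generalise large data into something a brain can parse” pipeline. PCA is just my stand-in for the generalising step and the dataset is synthetic; none of the names here come from a real system:

```python
# Minimal sketch: compress a large, high-dimensional dataset into a
# 2-D picture a human can take in at a glance. PCA is the simplest
# stand-in for the "generalising" step; the data here is synthetic.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Pretend this is some dense machine-scale record: 10k samples, 300 features.
data = rng.normal(size=(10_000, 300))
data[:5_000, :3] += 4.0  # hide a coarse structure for the projection to find

# "Generalise": project the 300-D cloud down to the 2 directions
# that retain the most variance.
flat = PCA(n_components=2).fit_transform(data)

# "Visualise": one scatter plot instead of 3,000,000 numbers.
plt.scatter(flat[:, 0], flat[:, 1], s=2, alpha=0.3)
plt.xlabel("component 1")
plt.ylabel("component 2")
plt.title("300-D data, summarized for a human eye")
plt.show()
```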

As I understand it, the difference between a smart human and a not-so-smart human is contextual. It’s like how your strength level doesn’t matter for driving… Smart things today would not seem so smart then… it’s hard for most people to envision the 4th revolution because we have never seen applications of intelligence like this before.

Then what happens to Level1Tech in this hypothetical framework?

From the viewer’s perspective: Level1 could be a 24x7 stream into your machine awareness. Your “awareness bot” will process the L1 stream on your ComputeBlock, summarize the information and sentiment for you, and present the content according to your preferences, or suggest what you might want to know from this source. You can also search, explore, and filter through information from the L1 stream.
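A toy sketch of what that viewer-side loop could look like. Everything here (StreamItem, summarize, score_interest, the sample items) is invented for illustration; a real summarizer would presumably be a trained model rather than word truncation:

```python
# Hypothetical sketch of the "awareness bot" loop described above.
# Every name here is made up for illustration; nothing ties this to a real L1 API.
from dataclasses import dataclass

@dataclass
class StreamItem:
    source: str   # e.g. "hardware bot"
    text: str

def summarize(text: str, max_words: int = 25) -> str:
    # Stand-in for a real summarizer (an LLM call, extractive model, etc.).
    words = text.split()
    return " ".join(words[:max_words]) + ("…" if len(words) > max_words else "")

def score_interest(item: StreamItem, preferences: set[str]) -> int:
    # Crude preference match: count overlapping keywords.
    return len(preferences & set(item.text.lower().split()))

def digest(stream: list[StreamItem], preferences: set[str], top_n: int = 3) -> None:
    # Rank the stream by the viewer's preferences and print short summaries.
    ranked = sorted(stream, key=lambda i: score_interest(i, preferences), reverse=True)
    for item in ranked[:top_n]:
        print(f"[{item.source}] {summarize(item.text)}")

digest(
    [StreamItem("hardware bot", "new epyc server boards reviewed in depth today"),
     StreamItem("news bot", "weekly open source and privacy news roundup")],
    preferences={"server", "privacy"},
)
```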

From Level1’s perspective: like the videos and forums, the L1 stream is the collective awareness of the L1 team. The L1 awareness can have sub-topic awarenesses such as a news bot, hardware bot, server bot, etc. The L1 team will automate and train these bots to handle relevant content and updates. Of course these bots will have Level1 personalities that the team will train. Think of it as the machine awareness of L1 communicating with the machine awareness of a viewer…

I’m not sure if I can give the specific details because it’s getting too close to my work…
But some of the applications could be like consulting the level1tech awareness, and your tech-specific machine awareness could contain multiple sources…

Cool. Sounds interesting… I will look at it sometime later; I avoid looking at others’ projects, ideas, or anything until I’m ready for them. I know it sounds hypocritical when I’m trying to share my own… but the reason is that my mind gets sucked too deep into things and life becomes hard… it’s kind of a disability… And right now I’m into something else, but I’ll surely let you know my view when I see it. Regarding this topic, I’ve had older thoughts similar to it in bits and pieces… Last week I was just so tired of the issue of privacy and the direction of tech in general that I vomited it here on the forum to start a discussion. I have a lot to add to this topic, technologically and conceptually, but everything turned political and I lost interest… I don’t know if it’s worth the effort, but if I get some time on a weekend I’ll try making a concise paper with designs etc. and attach it to this thread.


I can relate. Take your time though. Don’t feel forced. Some of the best discussions I’ve had have spanned several weeks, with days of pause. Gives one time to think. I am kind of participating “when I have the time to”. Then something exciting happens and I try to share an observation.

If it helps you tidy up your thoughts, it probably is worth it. I don’t know how useful we are in that regard, but I do find it refreshing to discuss these things, or at least listen in. Even if the sharing of ideas does not end up in a realization of those ideas, it surely ends in inspiration. And in the knowledge that one isn’t completely alone with one’s more peculiar thoughts. Those things matter on a purely human-to-human level, and internet forums are rather meaningless without such meetings.


Exactly! There is a mathematical model for systems analysis, in the form of a tuple, that includes not only an abstraction for the projected influence of inputs and outputs, but also a space for predicting the system state after the fact. I advocate this model, but not under the delusion that the abstraction can be fully satisfied. There are no crystal balls, not even for those who work tirelessly toward enabling predictive value. We are all a bit uneasy about change. The fear of the unknown is an important issue to raise, but it’s much easier when it’s “do something or suffer anyway”.
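For concreteness, one standard tuple of this kind is a Mealy-style transition system; this is my choice of example, not necessarily the exact model meant above:

```latex
% A Mealy machine: one common tuple model of a system,
% with a next-state map (the "prediction" slot) and an output map.
% S: states, I: inputs, O: outputs.
M = (S, I, O, \delta, \lambda), \qquad
\delta : S \times I \to S, \qquad
\lambda : S \times I \to O
```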

That’s pretty much my take as well. We really don’t have an accepted definition of intelligence. There are a lot of people working on it, but it may be that confidence in a definition will only come from demonstrating it through a completed AI or something. There is a lot of argumentation that stands to reason and is indicated by what we observe. It may just be a consequence of the paradigm, but it generally revolves around a system’s ability to interact richly, efficiently, and effectively with its environment or overarching system. It’s pretty vague, even at that. It’s probably more a problem for your use cases than mine. I realize that building it is the true test.

You are quite right that intelligence is contextual. Perhaps “situational” or “situated”. It can only be measured by the ability to negotiate challenges, and challenges are always situations. The test of AI being intelligent, defined as indistinguishable from a human intelligence, is just another measure of performance in one situation. Though, as I think you are also pointing out, there can indeed be many, and one may not be enough. But then, what is enough to acknowledge intelligence? 2 out of 3? 2 out of 3 of what? I am quite stupid in some situations, so I compensate by creating habits and limiting my behavior - I always have my wallet in the same pocket and won’t even wear pants that don’t have that pocket, to avoid confusing myself about where the wallet is. There are exactly three places I ever leave my spectacles, to avoid looking for them for an unbounded length of time. This helps me focus on achieving some grade of excellence in what matters to me. I am a specialized autonomous AI, since I probably can’t prove not being artificial to some degree.

In a world of compute-power-extended general mental ability, other things about man become more pronouncedly important: morals and ethics on how to use that absolute increase in GMA. Which kind of circles us back to… what kind of Spanish Inquisition is going to attempt to make us moral and ethical so as not to “destroy ourselves”? Logical thought is just one attribute of man; we are also beings of less rational urges.

I somehow think we are already able to observe both computer-extended general mental ability and the rise of several different moral Spanish Inquisition schools to tell us how to use it. Because, internet. Social networks. What will prevail? Sanity? Hm…

Be mindful that I phrased the sentence as the rise of a moral Spanish Inquisition being inevitable. A rise is in and of itself no ultimate victory. Also, it is not all Spanish Inquisition; there are many parallel (“moderate”?) ideas present in reality, just not being enunciated as loudly and ruthlessly.


I’ve touched on this through criticism, but I haven’t really gotten into detail with it. Societies tend to gain a theme of sorts, such as Hunter/Gatherer, Agrarian, Industrial, etc. The criticisms that I expressed revolved around the issues inhibiting the transition to the next one: Technological, Information, Automation, or whatever nomenclature the next revolution produces.

This appears to be a consequence of the markets being reluctant to adapt to emergent technologies by adjusting their models. This really is a large issue, and it is actively widening the gap between technologists and the average person.

One of the main issues is with interfaces. There is no connection between the interface and the underlying function of consumer electronics. It’s as though they are being designed by salespersons, because everything is -> controlled <- by salesmanship. The whole experience is under the control of the producer, for the sole interest of the producer.

Why would anyone want to create a technology that anyone could use, and use well, while not really having any understanding of it? More importantly, why would anyone trust someone who would do such a thing? This is a serious social issue. People generally don’t understand the society they live in because of this and similar “models”. I just don’t feel as though I can properly stress how dangerous the “never blame the user” dogma really is. The functional workings of the new technologies are quite literally being hidden from general public knowledge. People need to understand their environment. The gap cannot be sufficiently closed by mediation.

For instance (I’m going to use Level1 as a reference again, sorry guys), the folks at Level1 are very good at what they do; however, I guarantee that many of their suggestions, especially with respect to security measures, are often not taken seriously by their clients. I would bet it’s difficult to get a client to understand that the suggestions technical professionals make are actually likely to get them the outcome they desire, rather than the “more economical option” they think they want, which will likely get them a portion of the way there, which is probably not good enough.

Most business models exist in an incoherent bubble. This is because of an “economic” term and principle called “externality”. It’s actually part of “Economics” to ignore the overarching systems. This is as dangerous as anything else that humans are doing. This creates Extinction and Existential risk.

This isn’t just about disliking an iPhone because it’s glued together and has a Playschool interface. This model, this thought process can literally end humanity. The difference between Entropy and Extinction is the sum of Normalization and Novelty. Survival requires that we understand what is normative and novel.


We have come across this topic time and time again, in a variety of areas and topics, but you’ve expressed it here with great lucidity, and in the most appropriate context.

As implied in your post - there is very little difference between controlling experience and controlling (and reinforcing) behavior. What is experience, but a situated interaction?

People feel they are being disconnected from the effects of their actions, and this has come back to bite us in politics. We have seen the rise of the politically helpless, with no perception of effect. Could this be the main cause of their feelings of distrust? The subtle knowledge (aware or not) of someone manipulating their experience? Like having a feeling that someone is watching you, but being unable to confirm it?

I have been observing how small irritations cook us up into negative attitudes, and may cause both a permanence of dismay and more frequent instances of explosive behavior. There is this urban myth in which a frog will leap out if you put it in hot water, but if you put it in cold water and then heat it up slowly, the frog will cook, not noticing the slow change. Except here, we are injecting paranoia - one controlled experience at a time.

Except people do notice the unnatural, even when unable to express it. And this inability to express it is the problem. Tit for tat can never be forgiven if you don’t know what it is you are forgiving.

The inability to direct your coping mechanisms at the actual problem (the FB-controlled experience, for example, is made to break down such coping mechanisms) is all but certain to cause erratic, insane behavior, by misdirecting your coping mechanisms towards the wrong people and the wrong (non-)problems.

Indeed, could this obsessive-compulsive desire to control other people’s experience (now in every person’s life by the means of internet) already be the main, direct, and ultimate cause (or at least a prerequisite) of the political upheaval of the past few years?


“Social engineering” is a pretty interesting emergence. It is, in essence, utilizing behavioral science for evil before a mature behavioral science came about. Human behavior is relatively predictable. Behavior can be directed through responses to stimuli, because certain types of behavior can be expected under specific types of conditions. This is of course subject to the individual’s, or even the group’s, perception of the conditions. Attending to this, social engineering is really just a sophisticated methodology for lying to manipulate behavior.

By considering what particular behavior is desired, one can approximate it by creating a perception of the conditions that the desired behavior is appropriate for. This is often done by a combination of obscuring the observed conditions with fabricated conditions, and manipulating impulses through the choice of which specific conditions to fabricate. There is also some play on human ineptitude in Epistemology. People are often confused, or susceptible to confirmation bias, when faced with perceptions that cannot be verified. There is a lot of power in the study of what can be known.

The past few years, and then some… well, a lot more. Political manipulation is thousands of years old. Fortunately, we have the modern liberty of viewing works that were never intended for the common citizen. They are now in the public domain: not only the works of scientists and philosophers, but also those of economic and political figures like Adam Smith and Machiavelli. In “The Prince” there are horrifying tales of political manipulation and elitism. People often forget that Machiavelli was the real deal: he was a political adviser to a prince. His works are an interesting snapshot of the evolution of political manipulation.

This commons that resulted from the progression of The Enlightenment -> The New World -> the printing press -> the public domain was the beginning of the toothpaste escaping the tube. More recently, Noam Chomsky has had an abundance of material to work with in bringing political procedures into the light.

The way I view what is happening now is in the context of normative influence. It appears that natural principles carry a lot of weight even in the most complex systems. There is a lot of attendance, maintenance, and energy expenditure involved in maintaining a ruse, while normative influence flows along on impulse without the need for conscious thought. Manipulation is indeed possible; however, it is an exercise in swimming upriver. It’s the old adage of the little white lie that becomes a self-deprecating monster. This is essentially the societal parallel of the crisis cycle.


I think we might be getting mixed up with definitions. Intelligence is contextual and situational, but it’s quite different from awareness and memory encoding. A smart human, an intelligent person, “tech-enhanced awarenesses” and AGI may have some overlapping meaning, but there are subtle key differences among them. Again, this is how I understand it, and I might be wrong.

  1. A human can only be thought of as a smart “human” from others’ perspective. A smart human uses his/her knowledge, intelligence, and awareness to be a productive member of society, who contributes and excels within the framework of the best human interests and the long survival of the human species. It’s like a smart businessman, where the context is “business”.

  2. Intelligence is a tool that we all have individually. A highly intelligent person is not necessarily a smart human or a smart businessman. I think we have to be very careful about intelligence. A highly intelligent conscious being, whether AGI or human, could be very powerful and a real threat to humanity.

  3. As for awareness, it’s like a model of a certain part of reality or of an abstract concept, generally used by some kind of intelligence. Kinda like our visual sensory model in our brain. (This is what I meant with ComputeBlocks as our extended “cognition”, which is computation + awareness… When I implied it as our right, I meant the right to enhanced cognition. It’s like our right to live, marry, breed, etc.)

I’ll take this opportunity to elaborate further on machine intelligence. I personally find it very scary that we have such an overused term as AI/AGI. It’s a scarier term than people think. It’s like adding another intelligent species to a planet which we generally think we have sole ownership of.

I fear that people might be getting the wrong idea about AI/AGI. You can’t emulate the thoughts of an AI, or even speculate about or control its behaviours. Can anyone emulate or predict the thought process or motivation of a human-like intelligence with a 500+ IQ, let alone a machine intelligence of some kind? It’s not possible. I can tell you with good certainty that it’s a bad idea to develop any kind of intelligence that thinks on our behalf. If we need an AI to think for us, then we ourselves become redundant. If it can think for you, then it has the basic ingredients to become conscious.

We assign value to other species, which gives them a reason to exist for us. Take dogs: we like dogs’ emotional awareness compared to some other animals, while their overall low intelligence and limited temporal and contextual awareness are not a threat to us. In fact, the extent of dogs’ awarenesses, in combination with their genetic and inherited features and behaviours, is what we love in a good household pet. But we cannot approach AI/AGI on the same pretences as pets like dogs, cats, cows, and horses.

It’s not an accident that Earth is dominated by a single highly intelligent species. I think it’s too ignorant to assume that we could share our resources with any other higher intelligence that could jeopardise our existence. I would prefer machine-enhanced awareness with narrow AI, along with my own general intelligence, over advanced AI/AGI for the time being; at least until we understand it better, to be sure…

BTW, I’m in support of narrow AI… Narrow AI is really not AI the way people think of AI…


As I understand your point, I think it’s regarding moral and ethical understanding and its implications for higher forms of awareness.
If you agree with the definitions above, in the context of an individual’s extended cognition (machine awareness + computation), I am fairly optimistic about our capability as a species to manage any such situation. Although we have to be smart and careful with the implementation of any technology and framework, like a simulation platform… I think the least we can do is try to reduce the obvious unfairness and eliminate a lot of basic human problems with the help of technology and automation… Maybe some framework in line with this topic, one that cares about the growth of the species as a whole.

And for the second part: human intelligence is also the result of temporal, contextual, and physical awareness, and of our genes. I think machine-aided higher awareness will offload a ton of technicalities and stupidities… however, there will always be imbalances and differences, like we have today… For example, some people could develop too many filters, higher self-indulgence, and a habit of feeding their own intellect, which might lead to lower emotional response… kinda like super-nerds or something. But on the other hand, most people will become free from all the technicalities: more human connection, more outgoing, and higher emotional intelligence. I guess, better at being and living human… I’m a nerd, but I would like to slow down a bit from work, do things where my interests go, and enjoy life being more human.

I guess all I’m saying is that the spectrum of collective human consciousness is widening, and might accelerate with technology. I don’t think anything can be done except that it gets better or worse with the sentiments of society… I think humans are well capable of reorganizing and making the world a better place… maybe automating a few things like trust and basic income could help… who knows…
If we could just get enough time to reflect on our consciousness away from all the running and hustling… it’s not really easy for most people to enjoy life…

I think I’m talking in a loop at this point. I’ll elaborate my thoughts on gaps later…


Great point… I have a few thoughts on interfaces. I’ll write more tomorrow.


With respect to intelligence, we essentially pulled that term from our wazoos without yet having the sophistication to even concisely define it. Similar things have often happened, resulting in many instances of daily confusion over semantics.

I tend to approach it with a combination of Epistemology (what can be known) and Philosophy of Science (what can be empirically demonstrated). It’s generally about rich interaction with the environment in the context of mutual advantage. It’s really just the fine particulars that are in question. This does create some confusion and it indeed may not be a fully solvable problem. This isn’t to say that incoherent perceptions of intelligence should be accounted for though. I see no need to consider the input from Relativism and / or Post Modernism. We’re dealing with things that can be measured.

As for the pros and cons of the various forms of AI/AGI, I think all three of us should prepare for a couple of weeks of deliberation. :slight_smile: This topic gets into just about everything.


Indeed, mental stability and trustworthiness aren’t part of the high-intelligence equation. If we were to become pets to AIs, we would have no protection from cruelty - the only protection from intelligent cruelty among us (human beings) comes from the ability to enforce laws, and from public shaming. For those things to apply to an AI… there would need to be many more than one, and the shaming mechanism might not even work. And if we don’t become pets to AIs, we become insects: tolerable because easy to ignore, and trampled on when in the way. I see no reason any intelligent AI would not embrace being distinguishable from “human” - why it would willingly pass a Turing test without being under existential threat is beyond me. Similarly, human intelligence emulation makes very little sense to me for any practical purpose beyond vanity projects and academia. I’d say I too am a supporter of weak AI.

The availability of better interaction between weak AI and its users/owners, and of improved interfaces between the biological and the digital, is the exciting part to me. I am frankly kind of disgusted by bodily modifications - but I understand people doing it, I might yield to the idea as I get older, and I can see many useful applications for actually putting a chip in my head, or replacing failing body parts with cybernetics. However, that shit must serve me, not the manufacturer, and that’s unlikely to happen, isn’t it? Interaction-efficient alternatives to inserting chips in heads ought to be quite conceivable, though.

Yes, surely… it’s something that gives me existential anxiety, even though I try to approach it scientifically and logically…

I’ve had that thought as well, but the question is: are we smart enough to realise the implications before it gets out of control? People are entering into these endeavours on a profit model without evaluating the higher risks. Why wait for something bad to happen before people realise the potential existential threats? What if we could proactively design and establish a mechanism or framework to zero out uncertain possibilities such as this?

I’m also not too keen to have my body modified unless I really needed to… I would rather have a nano-tech-based secondary immune system to maintain a healthy state. Even though people might think all of this is not the immediate future, we still need our own compute framework that creates useful demand for our survival beyond blind consumerism, so we can securely integrate with such technologies in the future… Maybe a decade or two later, we could use a neural lace or something to connect to our ComputeBlocks’ machine awareness. We don’t have to speculate too far into the future… even if you simulate 5-10 years into the future, people think you’re crazy… and probably someone reading this discussion is already thinking that… so the question is still the same: are we smart enough to realise it?

Gaps in intelligence are really difficult for me to wrap my head around. The problem is with levels of coherence. I can determine that a 500+ IQ humanoid would still be subject to the laws of nature; however, I can’t describe in detail which specific ones. I could approximate that there would still be a great deal of normative pressure, as extinction-risk management is definitely something that such an intelligence could handle. I couldn’t, however, suggest that it would be part of its motivational system. It may be trying to maximize some effect that I can’t comprehend. Its level of coherence might be such that it has a rigorous, unified understanding of its own state and its environment. If the gap between levels of coherence weren’t likely to be so vast, maybe I could feel confident in trying to make a prediction; but this obviously isn’t the case.

One thing that is particularly interesting about interactions between civilization and First Nations peoples is how a technically unsophisticated society can be surprisingly coherent in its interactions with the environment, while a technically sophisticated society can be surprisingly incoherent in its interactions with the environment. This is a testament to the influence of initial conditions. Coherence with the environment is immediately required in tribal societies, and not so much in civilized societies.

When considering the risks involved with AI specifically, it’s just too late once it’s a 500+ IQ superhuman intelligence. Fortunately, those with foresight can begin working on the problem in advance by creating favorable initial conditions. I know it’s not comforting, but it’s probably the best we can do.

A more comforting thought is one that seems likely: rather than such a large, punctuated gap between legacy humanity and super intelligence, there will probably be more diversity in the outcome. While AI is being developed, so are genetic engineering and the neural interfacing that has already been addressed. It seems more likely that there will be a gradient of superhuman intelligence that may be able to manage some form of advocacy down the ranks.

(EDIT: I think I recall @GFX_Garage had more advanced ideas on such mechanisms than myself)

My take here is:

Proactivity and consideration lower the chance of action. The current social mantra is that all progress is good. Except progress needs bearings, or it becomes a dead end on some scale of an economy. Could be a very large scale. Or not. We don’t know. Or do we?

I made a bit of a hyperbole about that here:

In this case specifically, a change towards an unknown, rather than towards a consensus, is more likely to occur, because consensus takes time to qualify the change, and may not improve the quality of the decision or the ROI enough to be required. By the time you’ve qualified the change and observed its direction, someone else has already made an ROI on the trend. The current economy also shows that by jumping on a trend early, before it has become qualified, and selling early, also before it has become qualified, you make money. Also, sometimes a consensus is impossible, but one can agree that it is useful to start walking together for a little bit and see if we can make it past the next crossing - the “we’ll talk about that issue when we get there” is sometimes the only means to achieve any efficiency and effect. Probably because consensus is limited by preconceptions, or lacks the necessary vocabulary to discuss the issue. Progress (a change) is usually impossible if it has to wait for the weakest participant to catch up every bit of the way.

Cryptocurrency isn’t the only bubble. It kind of isn’t even a bubble; it is just a money-redistribution system. Neither mining nor transactions produce value - you buy it with your hardware and electricity. They both do shuffle money, though. There have been countless such bubbles over the past few years, except they weren’t available for everyman’s participation. Like IoT. I already miss the IoT; may it rest in well-manured earth. I don’t know what comes after cryptocurrency, though. Perhaps the AI augmentation we’ve been talking about? Idk. We may not be ready for it by any consensus regarding privacy and integrity. And to me it seems more likely to happen without a consensus than with it.

We are really the only reference point for considering the possibilities of superhuman intelligence. This doesn’t just leave us lacking information, though; it also leaves us with anxiety that promotes unfavorable preconceived notions. The observer effect seems to always be part of the equation; modern theory and experimentation include the observer’s perspective for this reason. We evolved to be effective at hunting and gathering, and we are applying our uncommonly sophisticated neural resources toward something novel. We are, in ourselves, a bit of a novelty.

One thing to consider is that we have an evolutionary binding to the biosphere that a technical high intelligence wouldn’t likely have. It’s pretty loose inference, but it doesn’t seem likely that a superhuman intelligence would restrict itself to the planet. It may just log the particulars of the system and move on to greater things. There may be little to no interaction between humanity and a self-organizing AI.

I think there are potentially many, many kinds of intelligence possible, and most of them might be no threat to us at all… But the probability of super-human AI is very high, since we understand intelligence mostly from the human perspective, especially our own. I’m pretty sure there are people trying to develop human-like AI right now, and a few of them will eventually succeed.

Most things we see in the known universe seem to be naturally occurring and built around the fundamental forces of the universe. So you can theoretically explain the behaviours and properties of stars, galaxies, black holes, etc. with gravity, electromagnetism, and the nuclear and color forces. My hypothesis is that intelligence is also a force: it emerges as a higher abstraction once there is enough complexity in a system. It’s more unpredictable than natural phenomena and forces, and applications of the intelligence force are complex and not part of natural occurrence in the universe. If complexity is proportional to intelligence, then this force might behave repetitively and predictably given limited resources to simulate ‘n’ units of intelligence in a confined space and time. We might have a different understanding of, and nuances in, intelligence if P=NP is true. I’ll elaborate more on this later…

I have a few other hypotheses on AI. I’ll be discussing them here, probably in a day or two.

BTW, here’s a new 1-hour documentary on AI… looks interesting. It’s free to watch till Sunday. http://doyoutrustthiscomputer.org/watch

Update
Watched it… it’s pretty generic… might be worth it for people who don’t know a lot about AI.

Continuing the hypothesis/thought experiment “intelligence is a force”:

If intelligence is an emerging force, it raises a lot of questions… following a Q&A format…

  1. What is the state of this force today, and where might it lead?
    Maybe human intelligence is just a primordial manifestation of this force: it has enough complexity, but also a slow hardware response. A complete manifestation of this force might permeate the universe with the intelligence force. It would probably become the strongest force in the universe, and would probably change the properties and the shape of the universe eventually.
    So if you consider the universe as an object from a higher dimension, the final state of this object is heavily influenced by the forces played out through 3 spatial and 1 temporal dimension. From a higher-dimensional perspective, the most influential force might be the intelligence force, after its manifestation.

  2. What is the unit of the intelligence force?
    Maybe the smallest unit of intelligence is a change in a system that would otherwise not occur naturally. So if there were two big bangs with completely “identical” initial conditions, both should develop almost exactly alike until there’s enough complexity to create life that produces the primordial form of the intelligence force. From that point on, each universe exponentially splits into clouds of probabilities based on every such unit of intelligence.

  3. What would be the intrinsic and instrumental goals of the intelligence force?
    Intrinsically, it would try to reach its maximum functional capacity by exploiting any and all resources within its reach. That also means any sub-optimal form of intelligence would directly conflict with the instrumental goal of this force. It might also be able to change entropy and the arrow of time to select across the whole probability space. OK, that last line has gone too far even for me :joy:… just a thought experiment…

That approach is interesting.

Intelligence as a force seems as though it would be strictly a normative force, though, as it would be a higher organizational force that detects Entropy (in the context of disorder) and works around its arbitrary expression. It might also be a novel force, capable of seeing organizational potential in entropic emergences.

The heat death of the universe is an impending-doom scenario that is concerning to some even now, especially transhumanists and cosmists. I’m wondering if a super intelligence would be motivated to solve this long-term issue. This would be a hypothetical for a maximizing model until a solution comes about. It would, of course, promote the most unimaginable of hyper intelligences.

In the same vein, I’m wondering if a motivation space, containing the possibilities of motives, could be combined with probabilistic logic to get some idea of what could be expected.
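A toy version of that idea, just to show the mechanics: a hypothesized motive space with priors, updated by Bayes’ rule against one observed behavior. The motives, priors, and likelihoods are all invented for illustration:

```python
# Toy sketch of the "motivation space + probabilistic logic" idea.
# Everything here is hypothetical; this is just Bayes' rule applied
# over an assumed space of motives.
PRIORS = {
    "maximize_knowledge": 0.40,
    "self_preservation": 0.35,
    "incomprehensible_goal": 0.25,
}

# P(observation | motive): how likely each motive makes an observed behavior,
# e.g. "the system spends most of its resources on astronomy".
LIKELIHOODS = {
    "maximize_knowledge": 0.7,
    "self_preservation": 0.2,
    "incomprehensible_goal": 0.4,
}

def posterior(priors: dict[str, float], likelihoods: dict[str, float]) -> dict[str, float]:
    # Bayes' rule: multiply prior by likelihood, then normalize.
    unnormalized = {m: priors[m] * likelihoods[m] for m in priors}
    total = sum(unnormalized.values())
    return {m: p / total for m, p in unnormalized.items()}

for motive, p in posterior(PRIORS, LIKELIHOODS).items():
    print(f"P({motive} | observation) = {p:.2f}")
```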

For comparison, Einstein’s project to “read the mind of God” was even more ambitious. His approach was mathematical, and it left him trying to interpret mathematical yields he had no intuition for. I suppose this is a possible route; however, attaining correct answers may not be accompanied by the ability to interpret them correctly… even with computer mediation.

Haahaa! I go there every day, and I wouldn’t have it any other way. :slight_smile:
