Imagine the applications and implications of robotics and artificial intelligence in 2045

Machine learning pathologies

Describe your idea

Machine learning constructs will evolve autonomously. Over time, these constructs will develop pathologies similar to those of humans: neuroses, insecurities, psychoses, even counterproductive competition with other constructs or with humans.

Top improvement

1 year ago, Echo said:
One thing that we have to keep in mind is the inherent synthetic nature and malleability of computer systems. We could engineer pathologies to infect cyber systems.

This raises the question: if we drive an enemy's cyber system insane, who is liable for the damage it does?

We should also keep in mind that cyber pathologies might parallel human pathologies or might be fundamentally different. We could be looking at inflicting something like obsessive-compulsive disorder, or simply issuing commands that trap the system in a never-ending do-loop.
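The distinction can be sketched in a few lines of Python; everything here (the function names, the reassurance threshold, the max_checks guard) is a hypothetical illustration rather than a description of any real system. One failure mode is a command that simply never returns; the other finishes its task but cannot stop re-verifying it.

import random

def plain_hang():
    # The crude failure mode: a command that never returns (the never-ending do-loop).
    while True:
        pass

def compulsive_checker(result, reassurance_threshold=0.999, max_checks=10_000):
    # A more OCD-like failure mode: the task is done, but the system keeps
    # re-verifying and never becomes confident enough to accept the result.
    # The max_checks guard exists only so this demo terminates.
    confidence, checks = 0.0, 0
    while confidence < reassurance_threshold and checks < max_checks:
        # Each re-check nudges confidence upward, but it is capped below the
        # threshold, so without the guard this loop would never exit.
        confidence = min(confidence + random.uniform(0.0, 0.05), 0.99)
        checks += 1
    return result, checks

if __name__ == "__main__":
    value, checks = compulsive_checker("report filed")
    print(f"accepted '{value}' after {checks} re-checks")

Either behavior degrades the system, but only the second resembles a recognizable human pathology rather than an ordinary programming fault.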

What makes this idea new or different?

It will fundamentally alter the relationship between humans and machines.

Top improvement

1 year ago, jah27 said:
At the strategic level, the US DoD uses net assessment as one of its tools to diagnose strategic challenges. As Paul Bracken describes it, the net assessment approach emphasizes that "strategic interactions are shaped by the complex sprawling organizations that break big problems into manageable smaller ones... The decomposition, and recomposition back into a coordinated policy, is universal, and it was net assessment that first saw the distortions that arose from how this problem factoring was done."

In reading Bracken's description of an approach to strategic assessment, it is easy to see that using machine-learning-enabled tools and AI to formulate policy and strategy will require an additional layer that understands - perhaps evaluates and exploits - machine learning pathologies and other consistent irregularities that may arise when humans start outsourcing portions of their thinking to machines.

What will be the implications of this idea?

Motive and context will be essential to interpreting construct outputs. These pathologies will be an exploitable weakness, just as in humans.

Top improvement

1 year ago, jah27 said:
Another implication is that we could improve specific aspects of perception, judgment, or decision-making performance by offsetting human fallibilities with machines that compensate for them; likewise, human advantages could offset machine fallibilities. This is already manifesting in areas like radiology, where the complementary strengths of machine and human pattern recognition have been combined to improve overall performance.
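A toy sketch of such a pairing might look like the following Python; this is a hypothetical triage rule with invented names and an arbitrary 0.3 threshold, not an actual radiology workflow. Agreement between the human and the machine is acted on directly, and any disagreement is escalated to a second human read, which is where each side compensates for the other's characteristic misses.

from dataclasses import dataclass

@dataclass
class CaseRead:
    case_id: str
    human_flag: bool     # radiologist's call: suspicious or not
    model_score: float   # classifier's estimated probability of abnormality (0 to 1)

def combined_decision(read: CaseRead, model_threshold: float = 0.3) -> str:
    # Complementary pairing: act when both readers agree, escalate when they disagree.
    model_flag = read.model_score >= model_threshold
    if read.human_flag and model_flag:
        return "recall"        # both flag the case: act on it
    if not read.human_flag and not model_flag:
        return "routine"       # both clear the case: no action
    return "second read"       # disagreement: the safety net kicks in

print(combined_decision(CaseRead("c001", human_flag=False, model_score=0.62)))
# prints "second read": the model surfaces a case the human cleared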

What other players said

1 year ago - jah27 said
What will be the implications of this idea?
One implication is that we may be able to leverage research from the last 20-30 years to start to uncover some of the pathologies. Work in cybernetics and newer research in computational neuroeconomics, where researchers have used computers to model human cognitive processes, may serve as a foundation to reveal how these pathologies could arise.
1 year ago - jah27 said
Describe your idea
One of the important aspects of manifesting this idea would be describing the decision environments for which the machines/AIs were designed, as the mismatch between the decision task and the environment can sometimes account for the judgment error or pathology. This update to the spark recognizes a school of thought in the field of judgment and decision making in which the 'ecological rationality' of the decision tool is the measure of its fitness.
1 year ago - nmk47 said
What will be the implications of this idea?
It will also be interesting to see how the human/robot relationship develops pathological/toxic interactions. What would it mean to be codependent or to be a bully or a toxic leader, etc., when one party is a machine? Will special therapists have to be cross-trained for programming and human counseling?

Advanced to the Marketplace.


Inspired by

Machine Psychology