rotw (Mechanical), May 25, 2013
It is my understanding that artificial intelligence is based on various types of algorithms, and that a variety of technologies currently compete in the industry: artificial neural networks, genetic algorithms, fuzzy logic, etc., and combinations thereof.
Now, if I consider a basic example such as artificial neural networks (e.g. classification problems, image recognition), it is known that these systems go through a learning phase (calibration to a certain set of data); once calibrated, they can be exploited in industrial applications. So if an intelligent system is trained on a data set, it is expected to extract a certain pattern or signal from that data set. When applied beyond the training data, subject to certain limits, the system should still return some sort of exploitable information.
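To make that concrete, here is a minimal sketch of what I mean (this assumes scikit-learn is available; the function, network size, and numbers are purely illustrative, not from any real application): a small network is fit to noisy samples of a known signal on [0, 1], then queried both inside and outside the range it was trained on.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, size=(200, 1))          # training inputs in [0, 1]
y_train = np.sin(2 * np.pi * x_train).ravel() + 0.05 * rng.normal(size=200)

model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
model.fit(x_train, y_train)

# Inside the training range the prediction tends to track the signal;
# outside it, nothing in the fit constrains what comes out.
print(model.predict([[0.5]]))   # interpolation: within [0, 1]
print(model.predict([[2.0]]))   # extrapolation: outside [0, 1]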
Now I would like to consider a very simple situation.
If we know the response of a system to A and B according to a certain pattern (say, a linear and continuous transfer function), then for a certain condition C somewhere between A and B, we know that the corresponding response is bounded: it must lie between the system's responses to A and B. If the behavior is slightly non-linear, we would expect a correspondingly non-linear response (not a change of order of magnitude, for example), and so forth.
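To spell out the linear case: the response at C is a weighted average of the responses at A and B, so it cannot leave the interval they define. In symbols (plain textbook algebra, just to pin down the claim):

$$f(C) = f(A) + \frac{C - A}{B - A}\,\bigl(f(B) - f(A)\bigr), \qquad A \le C \le B \;\Rightarrow\; \min\{f(A), f(B)\} \le f(C) \le \max\{f(A), f(B)\}.$$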
Where I am heading is this: if we use an artificial intelligence system to predict or anticipate the response to a certain condition, can there be a "mathematical proof" that the information returned remains "bounded"? In other words, what guarantee do we have that an arbitrary system will not behave in a completely unexpected manner in terms of output, even when the input itself is within the operability limits set for the artificial intelligence system?
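One concrete form such a guarantee can take, as far as I understand it, is a Lipschitz bound: if each layer of a network is Lipschitz-continuous, the product of the per-layer constants bounds how much the output can move for a given change in input. A rough sketch (the weight matrices here are random placeholders, not a trained model):

import numpy as np

rng = np.random.default_rng(1)
weights = [rng.normal(size=(8, 4)), rng.normal(size=(8, 8)), rng.normal(size=(1, 8))]

# ReLU is 1-Lipschitz, so the product of the weight matrices' spectral norms
# upper-bounds the Lipschitz constant of the whole network.
lipschitz_bound = np.prod([np.linalg.norm(W, ord=2) for W in weights])

def forward(v):
    for W in weights[:-1]:
        v = np.maximum(W @ v, 0.0)  # ReLU hidden layers
    return weights[-1] @ v

x = rng.normal(size=4)
delta = 0.01 * rng.normal(size=4)   # small input perturbation

# Guarantee: |f(x + delta) - f(x)| <= L * ||delta||, checked numerically here.
change = np.abs(forward(x + delta) - forward(x))
print(change, "<=", lipschitz_bound * np.linalg.norm(delta))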
This is an excuse to ask a broader question (sorry if it is very generic): nowadays we hear a lot about artificial intelligence systems, which are quite present in our daily life, devices, etc. So is there a mathematical foundation underlying the design and deployment of these systems, or is it all "trusted" based only on empirical validation?
For the experts, please forgive my ignorance on the subject. This is meant more to trigger a discussion and to learn. Thanks.
I posted in this section of the forum because it concerns a future technology trend, but please feel free to correct me if this is the wrong place.