Thursday, February 10, 2011

AI in Designing ‘Intelligent’ Ship Autopilots


*   Introduction:
Before discussing AI, we need to define 'What is AI?'. There are many definitions, of which a widely used modern one organizes the field into four categories: systems that think like humans, systems that act like humans, systems that think rationally, and systems that act rationally.
Automatic steering of ships has been a goal of seafarers for many years. Imperatives such as reduced manning and rising fuel costs have led to innovative designs, from the classic PID (proportional–integral–derivative) controller to adaptive and robust control, and latterly to intelligent control. The intention here is to present an overview of autopilot development and to illustrate how the so-called intelligent paradigms of fuzzy logic and artificial neural networks have been employed for intelligent ship steering, along with their limitations.
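As a point of reference for what the 'classic' approach looks like, here is a minimal sketch of a PID heading controller in Python. The gains, rudder limit and example numbers are illustrative assumptions, not values from any real autopilot.

# Minimal sketch of a PID heading autopilot (illustrative gains, not a real design).
def pid_rudder_command(heading_error_deg, error_integral, error_rate,
                       kp=1.0, ki=0.05, kd=5.0, max_rudder_deg=35.0):
    """Return a rudder demand (degrees) from the heading error, its integral and its rate."""
    rudder = kp * heading_error_deg + ki * error_integral + kd * error_rate
    # Saturate to the physical rudder limit.
    return max(-max_rudder_deg, min(max_rudder_deg, rudder))

# Example: 10 degrees off course and drifting further off at 0.5 deg/s.
print(pid_rudder_command(10.0, 2.0, 0.5))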

* A brief history of ship autopilot evolution:
The process of automatic steering of ships has its origins several centuries ago, in the time when early fishermen would bind the tiller or rudder of their boats in a fixed position to hold an optimal course, freeing extra manpower to assist with launching and recovery of nets. Most likely the criterion for selecting the optimal course was to minimize induced motions and to maintain a heading that helped the deployment or recovery of the nets. This remained the case until the Industrial Revolution; only afterwards were methods for automatically steering ships first contemplated, and the first ship autopilots came into use during the early part of the twentieth century.

*   ‘Intelligent’ approaches to autopilot design:
The main difficulty in discussing intelligent approaches to autopilot design (and, in general, any AI design) is that of defining what is meant by 'intelligent' or 'intelligent control', and stems from the fact that there is no general agreement on definitions of human intelligence and intelligent behaviour.
One of the earliest definitions of a criterion for 'machine intelligence' is that by Turing, who expressed it as the well-known Turing test, undertaken in the following manner: a person communicates with a computer through a terminal. When the person is unable to decide whether they are talking to a computer or to another person, the computer can safely be said to possess all the important characteristics of intelligence*.
The viewpoint adopted throughout is that intelligent control is the discipline that involves both artificial intelligence and control theory, and that its design should be based on an attempt to understand and duplicate some part of the phenomena that ultimately produce behaviour that can be termed 'intelligent', i.e. generalisation, flexibility, adaptation, etc. The two paradigms generally accepted as matching or replicating these intelligent characteristics are fuzzy logic and artificial neural networks. Over the last twenty years or so there has been an explosion of interest in applying intelligent control to a wide range of application areas, including ship autopilot design, the motivation being robustness qualities that can cope more effectively with the non-linear and uncertain characteristics of ship steering. Some of the approaches in this regard are:
- Fuzzy logic approaches: The use of fuzzy set theory as a method for replicating the non-linear behaviour of an experienced helmsman is perhaps the most appropriate application of this technique. Fuzzy rules of the type "IF heading error is positive small AND heading error rate is positive big THEN rudder angle is positive medium" typify the actions of an experienced helmsman. The schematic of a fuzzy logic controller is shown in Fig. 1, where the conventional controller block is replaced by a composite block comprising four components:
• The rule base, which holds a set of ''if–then'' rules quantified through appropriate fuzzy sets to represent the helmsman's knowledge.
• The fuzzy inference engine, which decides which rules are relevant to a particular situation or input and applies the actions indicated by those rules.
• Input fuzzification, which converts the input value into a form that can be used by the fuzzy inference engine to determine which rules are relevant.
• Output defuzzification, which combines the conclusions reached by the fuzzy inference engine to produce the rudder demand. It is these four blocks, acting together, which encapsulate the knowledge and experience of the helmsman; a minimal sketch of such a rule evaluation is given below.
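Here is a minimal sketch, in Python, of how a couple of such rules could be fuzzified, fired and defuzzified into a rudder demand. The triangular membership shapes, the two-rule base and the output set centres are invented for illustration only, not taken from any published autopilot.

# Minimal sketch of a fuzzy heading controller with two rules and
# weighted-average defuzzification (membership shapes and rule base are illustrative).

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_rudder(heading_error, error_rate):
    # Fuzzification: degrees of membership of the two inputs.
    err_pos_small = tri(heading_error, 0, 5, 10)
    err_pos_big   = tri(heading_error, 5, 15, 25)
    rate_pos_big  = tri(error_rate, 0.5, 1.5, 2.5)

    # Rule base ("IF ... AND ... THEN rudder is ..."), with AND taken as min.
    # Consequents are represented by the centres of the output fuzzy sets (deg).
    rules = [
        (min(err_pos_small, rate_pos_big), 15.0),   # ... THEN rudder positive medium
        (min(err_pos_big,   rate_pos_big), 30.0),   # ... THEN rudder positive big
    ]

    # Defuzzification: weighted average of the consequent centres.
    total_firing = sum(strength for strength, _ in rules)
    if total_firing == 0:
        return 0.0
    return sum(strength * centre for strength, centre in rules) / total_firing

print(fuzzy_rudder(heading_error=6.0, error_rate=1.2))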

- Neural network approaches: The human brain comprises approximately 10^11 neurons, each of which consists of a cell body, numerous fibres (dendrites) extending from the cell body, and a long fibre (axon) which carries signals to other neurons. The axon branches into strands and sub-strands, at the ends of which are synapses forming the weighted connections between neurons (each neuron making a few thousand synapses with other neurons); sending a signal from one neuron to another is thus a complex electro-chemical process. The principle, however, is that if the weighted sum of the inputs arriving at a cell at any instant is above a threshold level, the neuron 'fires' and sends a signal along its axon to its synapses with other neurons. As a result of this complex interconnection structure, data patterns representing a particular event have unique propagation paths through the brain. The most widely used 'feed-forward' layered network, or multi-layer perceptron, is an artificial analogue of such a structure (as in Fig. 2). Here the circles are the simulated neurons and the links represent the weighted connections; information flow is in one direction only. It should be noted that there are several neuron simulation models and a wide range of neuron interconnection structures; recurrent networks, for example, are configured with internal connections that feed back to other layers or to themselves. A minimal sketch of such a feed-forward network is given below.
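The following is a minimal sketch of the forward pass of such a layered network, assuming two inputs (heading error and error rate), one hidden layer and a single rudder output; the layer sizes, random weights and tanh activation are illustrative choices, not a trained autopilot.

import numpy as np

# Minimal sketch of the feed-forward network in Fig. 2: two inputs
# (heading error and error rate), one hidden layer, one rudder output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)   # input -> hidden weighted connections
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # hidden -> output weighted connections

def forward(x):
    """Propagate an input pattern through the weighted connections."""
    h = np.tanh(W1 @ x + b1)        # each hidden 'neuron' fires on its weighted sum
    return W2 @ h + b2              # linear output: rudder demand

print(forward(np.array([10.0, 0.5])))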

Clearly the neural network shown in Fig. 2 varies considerably in size, structure and complexity from the biological neurons described above. Despite this fundamental difference, such networks have been trained to approximate continuous non-linear functions and have been used successfully in a wide range of applications. Training involves supervised learning, reinforcement learning or unsupervised learning. The first involves providing the network with two data patterns, the input pattern and its corresponding output pattern. This enables an error signal, supplied by a 'teacher', to be derived, which allows adjustment of the interconnection weights so as to minimise the error between the actual and the desired output. Reinforcement learning, or learning with a critic, works by deriving an error when a target output is not available for training. In this case the neural network obtains an error measure from an application-dependent performance parameter, the connection weights are adjusted, and the network receives a reward/penalty signal. Training proceeds so as to maximise the likelihood of receiving further rewards and minimise the chance of penalty. With unsupervised learning there is no external error feedback signal to aid classification, as in pattern recognition applications; in this case the network is required to establish similarities and regularities in the input data sequence.
Some approaches to neural-network-based autopilots use the network to mimic the actions of an existing autopilot. Such autopilots are trained by subjecting the neural network to a wide range of operational conditions, thereby enabling it to provide satisfactory control action even for conditions for which it has not been trained explicitly. Other approaches use neural network mappings which generate controller parameters and/or state estimates from measured system performance in order to provide direct adaptive control. Alternatively, the neural network is configured to produce a mapping which relates actual performance to an accurate set of model parameters, i.e. indirect adaptive control. A sketch of the mimicry idea is given below.
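The following sketch illustrates the mimicry idea under simple assumptions: a small network is trained by plain gradient descent to reproduce the rudder commands of a made-up linear 'teacher' autopilot on normalized inputs. It is not the architecture of Figs. 3 or 4, just the bare supervised-learning loop.

import numpy as np

# Sketch of supervised training: a small network learns to mimic rudder commands
# recorded from an existing autopilot (here a made-up linear "teacher").
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 2))      # normalized (heading error, error rate) samples
y = 0.5 * X[:, 0] + 0.3 * X[:, 1]          # the teacher autopilot's rudder commands

W1, b1 = rng.normal(scale=0.5, size=(8, 2)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(1, 8)), np.zeros(1)
lr = 0.1

for epoch in range(2000):
    H = np.tanh(X @ W1.T + b1)             # hidden-layer activations
    pred = (H @ W2.T + b2)[:, 0]           # network's rudder output
    err = pred - y                         # actual minus desired output
    # Back-propagation: gradient descent on the mean squared error.
    dW2 = (err[:, None] * H).mean(axis=0, keepdims=True)
    db2 = np.array([err.mean()])
    dZ = (err[:, None] * W2) * (1 - H ** 2)
    dW1 = dZ.T @ X / len(X)
    db1 = dZ.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print("final mean squared error:", float((err ** 2).mean()))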
Examples of indirect and direct adaptive control approaches for intelligent ship autopilots have been given in which a model reference adaptive control architecture is proposed, utilising neural networks to provide approximate mappings of the control and to implement the adaptive law (Fig. 3). Another combination of neural networks and fuzzy logic has been proposed in which fuzzy decision making is used for the adaptive law and also to filter (shape) the rudder demand, which is necessary to prevent rudder actuator saturation (Fig. 4).
                                                          
- Neurofuzzy approaches: The attractiveness of combining the transparent linguistic reasoning qualities of fuzzy logic with the learning abilities of neural networks to create intelligent self-learning controllers has, over the last decade, led to a wide range of applications. Such approaches bring together the inherently robust and non-linear nature of fuzzy control with powerful learning methods through which the deficiencies of traditional fuzzy logic designs may be overcome. Many of the proposed fusions may be placed into one of two classes, networks trained by gradient descent or by reinforcement paradigms, although some methods combine these learning techniques. Whatever training method is chosen, the parameters within the fuzzy controller which are to be 'tuned' must be selected.
One of the most useful and widely used combinations of neural networks and fuzzy logic is the Adaptive Network Based Fuzzy Inference System (ANFIS) proposed by Jang, whereby the fuzzy consequents of first-order Sugeno-type rules of the form:

        IF x is A AND y is B THEN f = p·x + q·y + r   (with p, q, r the consequent parameters)

are tuned using neural networks. Fig. 5 shows the basic ANFIS architecture, where the fuzzy inference engine comprises a layered, feedforward network with some of the parameters represented by adjustable nodes (rectangles) and others by fixed nodes (circles). Data enters the network at layer 1, whose nodes contain the membership functions. The second layer combines the input membership grades and computes the firing strength of each rule. Layer 3 is a normalising function producing normalised firing strengths. Layer 4 contains the fuzzy consequents, the outputs of which are aggregated as a weighted sum in layer 5. A small numeric sketch of this forward pass is given below.
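As an illustration, the following sketch pushes one input pair through the five layers described above for a two-rule, first-order Sugeno system; the Gaussian membership parameters and the consequent coefficients p, q, r are invented values, not a trained ANFIS.

import numpy as np

# Sketch of an ANFIS-style forward pass with two first-order Sugeno rules
# (membership parameters and consequent coefficients are illustrative).
def gauss(x, centre, spread):
    return np.exp(-0.5 * ((x - centre) / spread) ** 2)

def anfis_output(x, y):
    # Layer 1: membership grades of the two inputs.
    A1, A2 = gauss(x, 0.0, 5.0), gauss(x, 10.0, 5.0)
    B1, B2 = gauss(y, 0.0, 1.0), gauss(y, 2.0, 1.0)
    # Layer 2: firing strengths (product T-norm).
    w1, w2 = A1 * B1, A2 * B2
    # Layer 3: normalised firing strengths.
    w1n, w2n = w1 / (w1 + w2), w2 / (w1 + w2)
    # Layer 4: first-order Sugeno consequents f_i = p_i*x + q_i*y + r_i.
    f1 = 0.5 * x + 2.0 * y + 0.0
    f2 = 1.0 * x + 3.0 * y + 5.0
    # Layer 5: weighted sum of the rule outputs.
    return w1n * f1 + w2n * f2

print(anfis_output(6.0, 1.2))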



* Stability considerations:
As discussed above, an intelligent autopilot can be seen as an advantageous combination of different control design techniques and searching algorithms that will lead to the attainment of the control objectives. As a consequence an intelligent controller is a highly non-linear, often time-variant controller, and therefore the stability analysis of such control systems is neither straightforward nor as general as it is in the case of linear time-invariant systems, where stability is a global and well-defined property of the system. The difficulty in studying the analytical behaviour of combinations of neural networks and fuzzy logic based control systems has led to the practice of using extensive simulation trials to show the acceptable performance and viability of the proposed intelligent control system.
In the simplest control structure, neural networks and fuzzy logic systems act as non-linear elements in the loop. Therefore classical approaches, such as describing functions or the Popov criterion**, can be applied to analyse the stability of such systems, while more sophisticated techniques like Lyapunov stability theory** can be used for both analysis and synthesis of non-linear control systems in terms of boundedness of the control signal and tracking error. From a practical point of view, the approximation quality of the intelligent controller raises the question of how many network nodes and what kind of learning algorithm have to be used in order to prove that the error representing the non-linear plant dynamics, approximated by the neural network or fuzzy logic system, converges to zero.
A comparison of different neural network structures suggests that radial basis function neural networks with fixed input parameters are suitable for stability analysis, while multilayer perceptron neural networks trained with the back-propagation algorithm are not suitable for this purpose owing to their highly non-linear nature. A sketch of such a radial basis function network is given below.
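For concreteness, here is a sketch of a radial basis function network with fixed centres and widths; only the output weights would be adapted, and because the output is linear in those weights the stability analysis becomes tractable. All the numbers are illustrative assumptions.

import numpy as np

# Sketch of a radial basis function network with fixed input parameters
# (centres and widths); only the output weights would be adapted online.
centres = np.array([[-10.0, -1.0], [0.0, 0.0], [10.0, 1.0]])   # fixed centres
width = 5.0
out_weights = np.array([-20.0, 0.0, 20.0])                      # the adjustable part

def rbf_output(x):
    """Gaussian basis activations combined linearly into a rudder demand."""
    phi = np.exp(-np.sum((centres - x) ** 2, axis=1) / (2 * width ** 2))
    return float(out_weights @ phi)

print(rbf_output(np.array([6.0, 0.5])))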
 

Concluding remarks:
From this overview of approaches to the design of intelligent autopilots it is seen that such advanced control systems are often the result of an advantageous combination of different control design techniques that try to act and make decisions like an experienced helmsman. However, stability concerns have been raised about using intelligent paradigms for ship autopilots, because in unpredictable, critical and emergency situations the system cannot think like a human (of course!), and it is therefore suggested that further work on the stability analysis of such intelligent control systems is vital if they are to become accepted, particularly in certificated and safety-critical applications.
But it is clear that, in comparison to PID autopilots (i.e., simple fixed feedback controllers), intelligent autopilots offer considerable improvements in performance and as such represent a viable alternative to PID designs. The analysis above also shows how well-known methods of non-linear analysis may be used to design a stable intelligent autopilot for ships. The functional equivalence between certain neural networks and fuzzy logic systems can be used to extend and generalise analytical results from one field to the other. However, until the stability issue of intelligent control is properly addressed and generic solutions are formulated, these kinds of advanced control systems cannot be fully developed and will not gain acceptance, especially in certificated and safety-critical applications.
Future developments in intelligent autopilots will undoubtedly evolve in a similar way to the intelligent control community as a whole, where the incorporation of ideas and methodologies based on learning control and intelligent decision making is gaining increasing popularity.



*This test emphasises the external behaviour required for a computer (or machine) to be safely called intelligent: that behaviour must be indistinguishable from that of a human being, and, most importantly, the judgement is made from a human point of view (that of the observer).
**The Popov criterion is a frequency-domain test for the absolute stability of a feedback loop containing a sector-bounded non-linearity, and is used here as a non-linear feedback analysis tool. Lyapunov theory is concerned with stability analysis by means of an energy-like function, and can give an estimate of how quickly the solutions converge to the desired value.
Both methods are employed for the analysis of non-linear systems.

References:
[1] Bennet A. A history of control engineering 1800–1930. Peter Peregrinus Ltd; 1997.
[2] Fossen TI. A survey of non-linear ship control: From theory to practice. Proc. 5th IFAC conference
      on Manoeuvring and Control of Marine Craft (MCMC2000), 2000. p. 1–16.
[3] Allensworth T. A short history of Sperry Marine, 1999. http://www.sperry-marine.com/pages/history.html.
[4] Wikipedia: entries on the PID controller, the Popov criterion and Lyapunov theory.
[5] Introduction to AI (lecture slides) by Prof. Dechter, University of California: definition of AI and the Turing test.
[6] Chen FC, Khalil HK. Adaptive control of non-linear systems using neural networks.


                                                                                                                                     By,
                                                                                                                                    Syed Ashruf,
                                                                                                                                    AE09B025.

Robotics and AI, with a specific example: the Chess Terminator

Firstly, let us see what AI is!
Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it.
AI textbooks define the field as "the study and design of intelligent agents" where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.
John McCarthy, who coined the term in 1956, defines it as "the science and engineering of making intelligent machines."


The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine.

Now about Robots...

Check out this quote from Isaac Asimov's story collection I, Robot.

"How old are you?" she wanted to know.

"Thirty-two," I said.

"Then you don't remember a world without robots. To you, a robot is a robot. Gears and metal; electricity and positrons. Mind and iron! Human-made! If
necessary, human-destroyed! But you haven't worked with them, so you don't know them. They're a cleaner better breed than we are."
    
                  - from I, Robot by Isaac Asimov

"In his series ,Issac Asimov depicts Robots as beneficial to society."

Today, robots are used in many ways, from lawn mowing to auto manufacturing. Scientists see practical uses for robots in performing socially undesirable, hazardous or even "impossible" tasks --- trash collection, toxic waste clean-up, desert and space exploration, and more. AI researchers are also interested in robots as a way to understand human (and not just human) intelligence in its primary function -- interacting with the real world.

"Robots are comprised of several systems working together as a whole. The type of job the robot does dictates what system elements it needs. The general
categories of robot systems are: Controller, Body, Mobility, Power, Sensors, Tools."     -NASA JSC Learning Technologies Project

"To me what makes a robot a robot, and as with every definition you can poke it enough until it breaks, but for me it's something that senses the world in some way, does some sort of computation, deciding what to do, and then acts on the world outside itself as a result."
- Rodney Brooks, the director of the Massachusetts Institute of Technology computer science and artificial intelligence laboratory,


"We already live with many objects that are, in one sense, robots: the voice in a car’s Global Positioning System, for instance, which senses shifts in
its own location and can change its behavior accordingly. But scientists working in the field mean something else when they talk about sociable robots.
To qualify as that kind of robot, they say, a machine must have at least two characteristics. It must be situated, and it must be embodied. Being
situated means being able to sense its environment and be responsive to it; being embodied means having a physical body through which to experience the
world. A G.P.S. robot is situated but not embodied, while an assembly-line robot that repeats the same action over and over again is embodied but not
situated." - from The Real Transformers. By Robin Marantz Henig.

Now I will discuss the "Chess Terminator".

For almost as long as we've had computers, humans have been trying to make ones that play chess. (Even before we had computers, way back in 1769, von Kempelen built the chess-playing automaton known as the Turk.)
The most famous chess-playing computer of course is IBM's Deep Blue,which in 1997 defeated the then World Champion Garry Kasparov.
But as powerful as Deep Blue was, it didn't actually move the chess pieces on its own. Perhaps that's a trivial task in comparison to beating the best chess player of all time, but still I was pleased to discover this recent video of a chess robot that more closely fits the true definition of a chess automaton.

The "Chess Terminator" was conceived by Konstantin Kosteniuk, the father and coach of Alexandra Kosteniuk, the current women's world champion. This robot is essentially a chess-playing robotic arm which can grasp pieces, move them to another square, and then press the chess timer to finish its move. The robot is
apparently quite energy efficient as well, as Kosteniuk has claimed that it can continue playing for 24 hours a day for three years straight.

As for how it works, it should be noted that the robot does not actually see the board; rather, it is connected to it. Since the pieces are fitted with sensors, the robot can detect when they are moved and responds appropriately. The hand portion of the robot is a three-pronged gripper which can open and close to grasp and release pieces. A hypothetical sketch of the move-detection idea is given below.
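The article does not describe the electronics, but a hypothetical sketch of the idea -- comparing the board's sensor readings before and after the opponent's move -- might look like this:

# Hypothetical sketch: detect a move by diffing two 8x8 occupancy grids
# reported by the board's piece sensors ('' marks an empty square).
def detect_move(before, after):
    source = target = None
    for row in range(8):
        for col in range(8):
            if before[row][col] and not after[row][col]:
                source = (row, col)                  # a piece left this square
            elif after[row][col] and after[row][col] != before[row][col]:
                target = (row, col)                  # a piece arrived here (possibly capturing)
    return source, target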

The Chess Terminator does have some flaws, however. Note that around the 2:45 mark Kramnik extends his hand offering a draw, but the robot – since it's not fitted with any kind of optical device – just keeps playing, very nearly taking off Kramnik's hand in the process!

The video is here, http://www.youtube.com/watch?v=fsRhTUPQfm4

Like the Turk before it, the Chess Terminator seems to sit beyond the bounds of ordinary mechanism, and such machines provoked mechanicians interested in testing the limits of their craft to become conjurers. As conjurers, though, they did something of genuine interest: they created machines that straddled the breach between the possible and the impossible. Machines like the Chess Terminator, with mechanism combined with AI, breach the distinction between machine and human still further.

True, at some points in time machines like IBM's Deep Blue have beaten the best chess players of all time. But as humans we still stand above mere machines, because the machines themselves need humans to build them and program them. Yes, they can indeed do more complex things than we can, but we are the mastermind behind all of it. So machines can never beat humans in every way, though the gap between us is constantly being narrowed by machines. We can be assured of that.

References:
1. http://www.aaai.org/aitopics/pmwiki/pmwiki.php/AITopics/Robots
2. http://en.wikipedia.org/wiki/Artificial_intelligence
3. http://robotzeitgeist.com/
4. http://www-robotics.jpl.nasa.gov/
5. http://www.gizmag.com/chess-terminator-robot-takes-on-kramnik-in-match/16996/

Abhilash Roy
CS09B012

AI in optical character recognition

Artificial intelligence (AI) is the intelligence of machines. AI textbooks define the field as "the study and design of intelligent agents" where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success.

There are different fields under optical character recognition:
Intelligent character recognition
Handwriting recognition
Automatic number plate recognition

In computer science, intelligent character recognition (ICR) is an advanced optical character recognition (OCR) or — rather more specific — handwriting recognition system that allows fonts and different styles of handwriting to be learned by a computer during processing to improve accuracy and recognition levels.
Most ICR software has a self-learning system referred to as a neural network, which automatically updates the recognition database for new handwriting patterns.
Because this process involves recognizing handwriting, accuracy levels may, in some circumstances, not be very good, but the software can achieve 97%+ accuracy in reading handwriting in structured forms. Often, to achieve these high recognition rates, several read engines are used within the software and each is given elective voting rights to determine the true reading of a character, as in the sketch below.
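A toy sketch of that voting step, assuming three hypothetical read engines each proposing a character:

from collections import Counter

# Each engine proposes its reading of the same character; the majority wins.
def vote(readings):
    return Counter(readings).most_common(1)[0][0]

print(vote(['S', '5', 'S']))   # two engines read 'S', one misreads '5' -> 'S'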

An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurones) working in unison to solve specific problems.

Neural network recognizers learn from an initial image training set. The trained network then makes the character identifications. Each neural network uniquely learns the properties that differentiate the training images; it then looks for similar properties in the target image to be identified. Neural networks are quick to set up; however, they can be inaccurate if they learn properties that are not important in the target data. A small sketch of this train-then-identify workflow is given below.
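As a small, concrete example of that workflow, here is a sketch using scikit-learn's bundled 8x8 digit images as a stand-in for a real scanned-character training set; the network size and data split are arbitrary choices.

# Sketch of a neural-network character recognizer trained on an image set.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                                   # 8x8 grey-scale digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(X_train, y_train)        # learns the properties that differentiate the training images
print("accuracy on unseen characters:", net.score(X_test, y_test))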

Off-line handwriting recognition involves the automatic conversion of text in an image into letter codes which are usable within computer and text-processing applications. The data obtained in this form is regarded as a static representation of handwriting.

On-line handwriting recognition involves the automatic conversion of text as it is written on a special digitizer or PDA, where a sensor picks up the pen-tip movements as well as pen-up/pen-down switching.


[Figure: Character recognition process]

                                                                 By,
                                                                                                          G GOKUL KRISHNA
                                                                                         CH09B065




Discussion of our understanding of life and of non-life....

Everything started with man producing the first complex machines, which were automata (or self-moving machines), by means of which he attempted to simulate nature and domesticate natural forces.
A point to highlight is the word "simulate": that is where the birth of AI was triggered.
Opening our eyes, today we stand far, far beyond the technology expected even by science fiction writers. Many might not have imagined that at some time in the future a human brain could play chess against a non-living thing (informally, a box which can be kept in your bag), and even lose to it.
And here I am to talk about a marvel in human history: speech recognition.
When we call most large companies, an automated voice recording answers and instructs you. Often you can just speak certain words (again, as instructed by a recording) to get what you need. The system that makes this possible is a type of speech recognition program -- an automated phone system.
Now it has even entered your computer. Surprised?! There is a reason to be... ever imagined that a "thing" made of plastic and wires could really identify a person's voice and enter text on its own? Another example of engineering encouraging laziness.
It allows users to dictate to their computer and have their words converted to text in a word processing or email document. You can access function commands, such as opening files and accessing menus, with voice instructions.
People with disabilities that prevent them from typing have also adopted speech-recognition systems.  
These systems work best in a business environment where a small number of users will work with the program. The accuracy rate will fall drastically with any other user.
                                                        
I don't want to go into the depths of how it works, but just to mention the basics, here it goes...
                                                                                  
To convert speech to on-screen text or a computer command, a computer has to go through several complex steps. When you speak, you create vibrations in the air. An analog-to-digital converter (ADC) translates this analog wave into digital data that the computer can understand. To do this, it samples the sound by taking precise measurements of the wave at frequent intervals, and it also handles different bands of frequency. People don't always speak at the same speed, so the sound must be adjusted to match the speed of the template sound samples already stored in the system's memory. A tiny sketch of the sampling step is shown below.
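Here is a tiny sketch of that sampling step, assuming a pure 440 Hz tone and a 16 kHz sampling rate (both made up for illustration):

import numpy as np

# Sketch of what the analog-to-digital step does: sample a continuous wave
# at fixed intervals and quantize the values.
sample_rate = 16000                                   # samples per second
t = np.arange(0, 0.01, 1.0 / sample_rate)             # 10 ms of sample time stamps
wave = np.sin(2 * np.pi * 440 * t)                    # the "analog" pressure wave
samples = np.round(wave * 32767).astype(np.int16)     # quantize to 16-bit integers
print(samples[:8])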

 
Now the most difficult part: the program examines phonemes in the context of the other phonemes around them. It runs the contextual phoneme plot through a complex statistical model and compares it to a large library of known words, phrases and sentences. The program then determines what the user was probably saying and either outputs it as text or issues a computer command.

But it's not as easy as it sounds. Imagine someone from Boston saying the word "barn." He wouldn't pronounce the "r" at all, and the word comes out rhyming with "John." Or consider the sentence, "I'm going to see the ocean." Most people don't enunciate their words very carefully, so the result might come out as "I'm goin' da see tha ocean," or even "I'm goin'" and "the ocean." Rules-based systems were unsuccessful because they couldn't handle these variations. This also explains why earlier systems could not handle continuous speech -- you had to speak each word separately, with a brief pause in between.
Today's speech recognition systems use powerful and complicated statistical modeling systems.

In a Markov model, each phoneme is like a link in a chain, and the completed chain is a word. During this process, the program assigns a probability score to each phoneme, based on its built-in dictionary and user training.
The system also has to figure out where each word stops and starts. The classic example is the phrase "recognize speech," which sounds a lot like "wreck a nice beach" when you say it very quickly. The program has to analyze the phonemes using the phrase that came before it in order to get it right. Here's a breakdown of the two phrases, followed by a toy sketch of how competing readings might be scored:
r  eh k ao g n ay  z       s  p  iy  ch
"recognize speech"
r  eh  k     ay     n  ay s     b  iy  ch
"wreck a nice beach"

There is some art into how one selects, compiles and prepares this training data for "digestion" by the system and how the system models are "tuned" to a particular application.
Why is this so complicated? If a program has a vocabulary of 60,000 words (common in today's programs), a sequence of three words could be any of 216 trillion possibilities. Obviously, even the most powerful computer can't search through all of them without some help. And that's where a boundary line is drawn between "life" and "non-life".
A computer has to labour over so many things which the brain does in an instant; the brain, in fact, also interprets the meaning of the words spoken.

No speech recognition system is 100 percent perfect; several factors can reduce accuracy.

The program needs to "hear" the words spoken distinctly, and any extra noise introduced into the sound can change how the system understands a word. The noise can come from a number of sources, including a loud background. Users should work in a quiet room with the microphone in a good position -- again, this draws a boundary on usage: it should be used in a very quiet environment. Low-quality sound cards can also introduce hum or hiss into the signal.
Running the statistical models needed for speech recognition requires the computer's processor to do a lot of heavy work. One reason for this is the need to remember each stage of the word-recognition search in case the system needs to backtrack to come up with the right word.

Homonyms--a big problem
Homonyms are two words that are spelled differently and have different meanings but sound the same. "There" and "their," "air" and "heir," "be" and "bee" are all examples. There is no way for a speech recognition program to tell the difference between these words based on sound alone.
Way in the future..
Only in the 1990s did computers powerful enough to handle speech recognition become available to the average consumer. Current research could lead to technologies that today are more familiar from an episode of "Star Trek." The Defense Advanced Research Projects Agency (DARPA) has three teams of researchers working on Global Autonomous Language Exploitation (GALE); it hopes to create software that can instantly translate two languages with at least 90 percent accuracy. DARPA is also funding an R&D effort called TRANSTAC to enable soldiers to communicate more effectively with civilian populations in non-English-speaking countries, and the technology will undoubtedly spin off into civilian applications, including a universal translator.


The following video shows both the usability and the limitations of speech recognition.


At some point in the future, speech recognition may become speech understanding.
Although it is a huge leap in terms of computational power and software sophistication, some researchers argue that speech recognition development offers the most direct line from the computers of today to true artificial intelligence. We can talk to our computers today. In 25 years, they may very well talk back. 

In the end I would say that there is a gap between life and non-life, but this gap is decreasing day by day, and there may or may not come a day in the future when it vanishes. Whatever happens, there is a difference between the living and the non-living... I hope we can see a time when both carry the "same meaning".




-----B.Vamshi
       EE09B104




Wednesday, February 9, 2011

Artificial Intelligence in Gaming


Game artificial intelligence refers to techniques used in computer and video games to produce the illusion of intelligence in the behavior of non-player characters (NPCs). The techniques used typically draw upon existing methods from the field of artificial intelligence (AI). However, the term game AI is often used to refer to a broad set of algorithms that also include techniques from control theory, robotics, computer graphics and computer science in general.
Since game AI is centered on appearance of intelligence and good gameplay, its approach is very different from that of traditional AI; hacks and cheats are acceptable and, in many cases, the computer abilities must be toned down to give human players a sense of fairness. This, for example, is true in first-person shooter games, where NPCs' otherwise perfect aiming would be beyond human skill.
Many contemporary video games fall under the categories of action, first-person shooter, or adventure. In most of these types of games there is some level of combat that takes place, and the AI's ability to be efficient in combat is important in these genres. A common goal today is to make the AI more human, or at least appear so.
One of the more positive and efficient features found in modern day video game AI is the ability to hunt. AI originally reacted in a very black and white manner. If the player was in a specific area then the AI would react in either a complete offensive manner or be entirely defensive. In recent years, the idea of "hunting" has been introduced; in this 'hunting' state the AI will look for realistic markers, such as sounds made by the character or footprints they may have left behind. These developments ultimately allow for a more complex form of play. With this feature, the player can actually consider how to approach or avoid an enemy. This is a feature that is particularly prevalent in the stealth genre.
Another development in recent game AI has been the development of "survival instinct". In-game computers can recognize different objects in an environment and determine whether it is beneficial or detrimental to its survival. Like a user, the AI can "look" for cover in a firefight before taking actions that would leave it otherwise vulnerable, such as reloading a weapon or throwing a grenade. There can be set markers that tell it when to react in a certain way. For example, if the AI is given a command to check its health throughout a game then further commands can be set so that it reacts a specific way at a certain percentage of health. If the health is below a certain threshold then the AI can be set to run away from the player and avoid it until another function is triggered. Another example could be if the AI notices it is out of bullets, it will find a cover object and hide behind it until it has reloaded. Actions like these make the AI seem more human. However, there is still a need for improvement in this area. Unlike a human player the AI must be programmed for all the possible scenarios. This severely limits its ability to surprise the player.
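A toy sketch of the kind of threshold-driven "survival instinct" described above might look like the following; the thresholds and state names are invented for illustration, not taken from any particular game.

# Hypothetical sketch of an NPC's threshold-driven decision rules.
def npc_decide(health, ammo, under_fire):
    if health < 0.25:
        return "flee_from_player"        # low health: run away until another trigger fires
    if ammo == 0:
        return "hide_and_reload"         # out of bullets: find cover and reload behind it
    if under_fire:
        return "take_cover_then_return_fire"
    return "hunt_player"                 # otherwise look for footprints, sounds, etc.

print(npc_decide(health=0.8, ammo=0, under_fire=True))   # -> hide_and_reload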


Making Computer Chess Scientific. By John McCarthy. "I complained in my Science review of Monty Newborn's Deep Blue vs. Kasparov that the tournament oriented work on computer chess was not contributing as much to the science of AI as it should. AI has two tools for tackling problems. One is to use methods observed in humans, often observed only by introspection, and the other is to invent methods using ideas of computer science without worrying about whether humans do it this way. Chess programming employs both. Introspection is an unreliable way of determining how humans think, but introspectively suggested methods are valid as AI if they work. Much of the mental computation done by chess players is invisible to the player and to outside observers. Patterns in the position suggest what lines of play to look at, and the pattern recognition processes in the human mind seem to be invisible to that mind. However, the parts of the move tree that are examined are consciously accessible. It is an important advantage of chess as a Drosophila for AI that so much of the thought that goes into human chess play is visible to the player and even to spectators."


"Alexander Kronrod, a Russian AI researcher, said 'Chess is the Drosophila of AI.' He was making an analogy with geneticists' use of that fruit fly to study inheritance. Playing chess requires certain intellectual mechanisms and not others. Chess programs now play at grandmaster level, but they do it with limited intellectual mechanisms compared to those used by a human chess player, substituting large amounts of computation for understanding. Once we understand these mechanisms better, we can build human-level chess programs that do far less computation than do present programs. Unfortunately, the competitive and commercial aspects of making computers play chess have taken precedence over using chess as a scientific domain. It is as if the geneticists after 1910 had organized fruit fly races and concentrated their efforts on breeding fruit flies that could win these races."




Mastering the Game: A History of Computer Chess. An online exhibit from the Computer History Museum. "The history of computer chess is a five-decade long quest to solve a difficult intellectual problem. The story starts in the earliest days of computing and reflects the general advances in hardware and software over this period. This on-line exhibition contains documents, images, artifacts, oral histories, moving images and software related to computer chess from 1945 to 1997."





How Chess Computers Work. By Marshall Brain for HowStuffWorks. "If you were to fully develop the entire tree for all possible chess moves, the total number of board positions is about 10^120 (a 1 followed by 120 zeros), give or take a few. That's a very big number. For example, there have only been about 10^26 nanoseconds since the Big Bang. There are thought to be only 10^75 atoms in the entire universe. When you consider that the Milky Way galaxy contains billions of suns, and there are billions of galaxies, you can see that that's a whole lot of atoms. That number is dwarfed by the number of possible chess moves. Chess is a pretty intricate game! No computer is ever going to calculate the entire tree. What a chess computer tries to do is generate the board-position tree five or 10 or 20 moves into the future."

Ch09b071
R.Surender Naik

Monday, February 7, 2011

Artificial Intelligence in Video Games

Artificial intelligence (AI) is the intelligence of machines and the branch of computer science that aims to create it. AI textbooks define the field as "the study and design of intelligent agents" where an intelligent agent is a system that perceives its environment and takes actions that maximize its chances of success
Game artificial intelligence refers to techniques used in computer and video games to produce the illusion of intelligence in the behavior of non-player characters (NPCs). The techniques used typically draw upon existing methods from the field of artificial intelligence (AI). Since game AI is centered on the appearance of intelligence and good gameplay, its approach is very different from that of traditional AI; the computer's abilities must often be toned down to give human players a sense of fairness. This, for example, is true in first-person shooter games, where NPCs' otherwise perfect aiming would be beyond human skill.

There are many types of computer programs that use AI. Market simulators, logic systems, and economic planners are some of the different fields of computer software that rely heavily on elements of artificial intelligence. These elements include situation calculus, tree searching, problem solving, and decision-making. But one genre of software that has been slowly borrowing more and more from the field of AI is video gaming.

Video games have gone through drastic improvements in the past ten years. It seems as if Moore's law applies to video games as well as to processor speed: video games seem to get twice as complex in some ways every eighteen months. As these games get more complex they also get more interesting and engaging. Video games are no longer just a distraction from work or a thirty-minute escape from reality. They are becoming an artistic form of expression for the programmers and developers and a serious hobby and undertaking for the players.
Physicist Willy Higinbotham created the first video game in 1958. It was called "Tennis for Two" and was played on an oscilloscope. The first game to run on a computer was "Spacewar!" by Steve Russell at MIT, which ran on a Digital PDP-1 mainframe and drew its graphics on the machine's CRT display.

The history of artificial intelligence in video games dates back to the mid-sixties. Before that, games were either two-player (meaning that there was no computer opponent) or any non-human objects were hard-coded. An example of hard-coded video game objects are the little aliens in "Space Invaders" that swoop down at the human player, who must shoot them before they reach the bottom of the screen. The manner in which these aliens move is explicitly coded into the game and is not determined at runtime by any percepts. The earliest real artificial intelligence in gaming was the computer opponent in "Pong" or one of its many variations. The computer paddle would do its best to block the ball from scoring by hitting it back at the user. Determining where to move the paddle was accomplished by a simple equation that calculated exactly what height the ball would cross the goal line at, after which the paddle moved to that spot as fast as allowed. Depending on the difficulty setting, the computer might not move fast enough to get to the spot, or might simply move to the wrong spot with some probability. A sketch of this paddle logic is given below.
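Here is a sketch of that paddle logic under simple assumptions: straight-line ball motion with wall bounces, a capped paddle speed, and an optional chance of picking the wrong spot to model the difficulty setting. The court dimensions and numbers are invented.

import random

# Sketch of the classic Pong opponent: predict where the ball will cross the
# goal line, then move toward that spot at a capped speed.
def paddle_target(ball_x, ball_y, vel_x, vel_y, goal_x, court_height):
    """Predict the ball's height when it reaches goal_x, folding in wall bounces."""
    time_to_goal = (goal_x - ball_x) / vel_x
    y = (ball_y + vel_y * time_to_goal) % (2 * court_height)
    return 2 * court_height - y if y > court_height else y

def move_paddle(paddle_y, target_y, max_speed, miss_probability=0.0):
    if random.random() < miss_probability:        # difficulty setting: sometimes aim wrong
        target_y += random.uniform(-50, 50)
    step = max(-max_speed, min(max_speed, target_y - paddle_y))
    return paddle_y + step

target = paddle_target(ball_x=10, ball_y=30, vel_x=5, vel_y=3, goal_x=100, court_height=80)
print(move_paddle(paddle_y=40, target_y=target, max_speed=4))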

For a long time, video game AI was not much more intelligent than the "Pong" AI. This was because the games were relatively simple and were most often played against a second player instead of a computer opponent. Atari sports-game AI agents were basically goal-oriented towards scoring points and governed by simple rules that controlled when to pass, shoot, or move. The advent of fighting games such as "Kung Fu" for Nintendo or "Mortal Kombat" for the Sega Genesis saw only a slight improvement in AI. The moves of the computer opponents were determined by what each player was currently doing and where they were standing. In the most basic games, there was simply a lookup table mapping what was currently happening to the appropriate best action. In the most complex cases, the computer would perform a short minimax search of the possible state space and return the best action. The minimax search had to be of short depth since the game was occurring in real time. Real-time game play has always been a major setback for AI in video games: there is very little time to compute actions and possible future states when the action never stops. A generic sketch of such a shallow minimax search is given below.
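The following is a generic sketch of such a shallow minimax search; the moves, apply_move and evaluate callbacks are placeholders for whatever game-specific code supplies the legal actions, the state transition and the scoring.

# Sketch of a short-depth minimax search, kept shallow for real-time play.
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Return the best achievable score from `state`, searching `depth` plies."""
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state)
    if maximizing:
        return max(minimax(apply_move(state, m), depth - 1, False,
                           moves, apply_move, evaluate) for m in options)
    return min(minimax(apply_move(state, m), depth - 1, True,
                       moves, apply_move, evaluate) for m in options)

def best_move(state, depth, moves, apply_move, evaluate):
    return max(moves(state),
               key=lambda m: minimax(apply_move(state, m), depth - 1, False,
                                     moves, apply_move, evaluate))

# Tiny demo: a made-up game where each ply adds 1, 2 or 3 to a counter and the
# evaluation is just the counter value; the maximizer picks 3.
print(best_move(0, depth=3,
                moves=lambda s: [1, 2, 3],
                apply_move=lambda s, m: s + m,
                evaluate=lambda s: s))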


Chinook is a computer program that plays English draughts (also known as checkers), developed around 1989 at the University of Alberta, led by Jonathan Schaeffer. Other developers are Rob Lake, Paul Lu, Martin Bryant, and Norman Treloar. In July 2007, Chinook's developers announced that the program has been improved to the point where it cannot lose a game.
Chinook's program algorithm includes an opening book (a library of opening moves from games played by grandmasters), a deep search algorithm, a good move evaluation function, and an end-game database for all positions with eight pieces or fewer. The linear handcrafted evaluation function considers several features of the game board, including piece count, kings count, trapped kings, turn, runaway checkers (an unimpeded path to being kinged) and other minor factors. All of Chinook's knowledge was programmed by its creators rather than learned with artificial intelligence. An illustrative sketch of such a handcrafted evaluation is given below.
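Here is an illustrative sketch of a handcrafted, linear evaluation in that spirit; the feature names follow the description above, but the weights and the board attributes are invented, not Chinook's actual values.

from collections import namedtuple

# Hypothetical board summary exposing the features listed above for each side.
Board = namedtuple("Board", "my_pieces their_pieces my_kings their_kings "
                            "their_trapped_kings my_trapped_kings "
                            "my_runaways their_runaways my_turn")

def evaluate(board):
    """Linear, handcrafted evaluation: a weighted sum of board features."""
    features = {
        "piece_count":      board.my_pieces - board.their_pieces,
        "king_count":       board.my_kings - board.their_kings,
        "trapped_kings":    board.their_trapped_kings - board.my_trapped_kings,
        "runaway_checkers": board.my_runaways - board.their_runaways,
        "turn":             1 if board.my_turn else -1,
    }
    weights = {"piece_count": 100, "king_count": 130,
               "trapped_kings": 50, "runaway_checkers": 60, "turn": 3}
    return sum(weights[name] * value for name, value in features.items())

print(evaluate(Board(8, 7, 2, 1, 1, 0, 1, 0, True)))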
The best games adapt to the player to provide an entertaining experience. But handling adaptation in immersive 3D worlds isn’t trivial! In terms of AI, the challenge is to build a system that:
1. Requires little work to script each possible adaptation,
2. Allows you to control the outcome to prevent unrealistic situations


-
N.HARDEV
CH09B066
