Sunday, December 4, 2011

Array Basic (Unit - I ) DS


Abstract
Precise modeling of the program heap is fundamental for understanding the behavior of a program, and is thus of significant interest for many optimization applications. One of the fundamental properties of the heap that can be used in a range of optimization techniques is the sharing relationships between the elements in an
array or collection. If an analysis can determine that the memory locations pointed to by different entries of an array (or collection) are disjoint, then in many cases loops that traverse the array can be vectorized or transformed into a thread-parallel version. This paper introduces several novel sharing properties over the concrete heap and corresponding abstractions to represent them. In conjunction with an existing shape analysis technique, these abstractions allow us to precisely resolve the sharing relations in a wide range of heap structures (arrays, collections, recursive data structures, composite heap structures) in a computationally efficient manner. The effectiveness of the approach is evaluated on a set of challenge problems
from the JOlden and SPECjvm98 suites. Sharing information obtained from the analysis is used to achieve substantial thread-level parallel speedups.


Introduction


The transformation of foreach style traversals on arrays/collections of scalars via vectorization and scheduling optimizations has had a major impact in improving the performance of optimized programs,
and thread–level parallelization of these loops is becoming increasingly important with the proliferation of multi–core processors. The ability to apply these transformations in modern object-oriented
programs that make heavy use of pointer structures and where arrays/collections often contain references to other heap structures (instead of scalar values) is severely limited by the difficulty of reasoning precisely about the structure of the program heap and the sharing relations between the pointers stored in these collections.
A significant amount of work has been devoted to the problem of shape analysis, which strives to provide precise information about the connectivity/reachability properties of the heap. A major
focus of this work has been on accurately modeling the construction and update of recursive data structures such as lists or trees [2, 11, 14, 15, 17, 18], while recent work explored how these recursive
structures are connected to form composite heap structures [2, 14]. These results are important steps in the development of a general-purpose heap analysis technique and allow for the parallelization
of recursive data structure traversals, but they do not adequately capture the sharing (or lack thereof) between the entries in a given
array or collection.
Take the simple program segment shown in Figure 1, which manipulates arrays of Data objects, each containing a single integer
field val. The first loop fills the array AU with a number of fresh
Data objects, and thus there is no aliasing between the entries in
the array. The second loop fills the array AS with objects selected
from the first array via some indexing function f(i), which we
assume cannot be understood by the analysis (e.g., f is a complex
non–linear transform or uses some form of randomization). Therefore, each entry in AS may potentially alias with another entry in
the array.



Base Heap Model


To analyze a program we first transform the Java 1.4 source into
a core sequential imperative language called MIL (Mid-level Intermediate Language). The MIL language is statically typed, has method invocations, conditional constructs (if, switch), exception handling (try-throw-catch) and the standard looping statements (for, do, while). The state modification and expressions cover the standard range of program operations: load, store, and assignment along with logical, arithmetic, and comparison operators. This mid-level language allows us to support the majority
of the Java 1.4 language while substantially simplifying the analysis. During this transformation step we also load in our specialized
standard library implementations, so we can analyze programs that use classes from java.util, java.lang or java.io. The semantics of the language is defined in the usual way,
using an environment mapping variables into values, and a store mapping addresses into values. We refer to the environment and the store together as the concrete heap. We model the concrete heap as a labeled, directed multi-graph (N, E) where each node n ∈ N is an object in the store or a variable in the environment, and each labeled directed edge e ∈ E represents a reference (a pointer between objects or a variable reference). Each edge is given a label that is either an identifier from the program or an integer i ∈ ℕ (the integers label the pointers stored in the arrays/collections). For an edge (a, b) ∈ E labeled with p, the notation a →p b indicates that the object/variable a points to b via the field name or identifier p. A region of the heap is a subset of the objects, with all the pointers that connect these objects and all the cross-region pointers that start or end at an object in this region.



Basic Properties
Our base abstract heap domain is a directed graph in which each node represents a region of the heap or a variable, and each edge represents a set of pointers or a variable target. The nodes and edges are augmented with additional instrumentation properties:

Types. Each non–variable node in the graph represents a region of the heap (which may contain objects of many types). To track the types of these objects we use a set of type names as part of the label of each node. This set contains the type of any object that may be in the region of the heap that is abstracted by the given node.


Linearity. To model the number of objects abstracted by a given node (or pointers by an edge) we use a linearity property which has two possible values: 1, which indicates that the node (edge) concretizes to either 0 or 1 objects (pointers), and ω, which indicates that the node (edge) concretizes to any number of objects (pointers) in the range [0, ∞).

Abstract Layout. To approximate the shape of the region a node abstracts, the analysis uses the abstract layout properties {(S)ingleton, (L)ist, (T)ree, (M)ultiPath, (C)ycle}. The (S)ingleton
property states that there are no pointers between any of the objects abstracted by the node. The (L)ist property states that each object has at most one pointer to another object in the region. The other
properties correspond to the standard definitions for trees, DAGs, and cycles.
Pictorially, we represent abstract heaps as labeled, directed multi-graphs. The variable nodes are labeled with the name of the variable they represent. Nodes abstracting concrete regions are denoted as [id, type, linearity, layout]; the first field (id) contains a unique identifier, while the rest correspond to the predicates described above. The abstract edges, which approximate sets of pointers, are represented in the figures as records [id, offset, linearity].
The offset component indicates the label of the references that are abstracted by the edge: a field identifier, a variable name when the edge connects a variable and a node, or the special label ? denoting
the summary field for all the elements in an array or a collection object.
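Purely as an illustration of these records (this is not the analysis' actual implementation; every type and field name below is hypothetical), the node and edge labels could be captured as:

#include <set>
#include <string>

// Hypothetical sketch of the abstract heap labels described above.
enum class Layout    { Singleton, List, Tree, MultiPath, Cycle };
enum class Linearity { One, Omega };      // 1, or omega (any number of objects/pointers)

struct AbstractNode {
    int id;                               // unique identifier
    std::set<std::string> types;          // types of objects the region may contain
    Linearity linearity;                  // 1 or omega
    Layout layout;                        // abstract shape of the region
};

struct AbstractEdge {
    int id;
    std::string offset;                   // field name, variable name, or "?" for array/collection entries
    Linearity linearity;
    int from, to;                         // ids of the endpoint nodes
};

int main()
{
    AbstractNode n{ 0, { "Data" }, Linearity::Omega, Layout::Singleton };
    AbstractEdge e{ 1, "?", Linearity::Omega, 0, 0 };
    return 0;
}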




Abstract Sharing
To model the concrete properties just defined, we introduce two instrumentation predicates, one to track relations between references which are abstracted by different edges in the model (connectivity),
and one to track the relations between references which are abstracted by the same edge in the model (interference).
Connectivity. Given the relation predicates in the concrete regions we can define a series of connectivity properties, connectivity = {share, connected, disjoint}, on the edges in the abstract domain, where the concrete region and the sets of references R, R′ are a valid concretization of the edges e and e′. To represent the connectivity property in the figures we extend the label of every edge e with a list of the identifiers of the other edges in the graph that it has a share or connected relation with. If an edge identifier e′ in this list is prefixed with a “!” then share(e, e′) holds, while if there is no prefix then they are related by the connected predicate; if an edge identifier e′ does not appear in the list, we will assume that disjoint(e, e′) is true.



Interference. The interference property is closely related to the concept of connectivity. While the latter tracks relation predicates between references that are abstracted by different graph edges, the interference property tracks relation predicates between references that are abstracted by the same graph edge. Given the definitions for the relation predicates in the concrete regions we can define a series of interference predicates, interference = {aliasing (ap), interfering (ip), non–interfering (np)}, on the edges in the abstract domain, where the concrete region and the set of references R are a valid concretization of edge e.



A recursive algorithm is one that calls itself; usually the return value of one call is passed back to the algorithm as a parameter of the next call. This parameter is the input, while the return value is the output.


A recursive algorithm is a method of simplification that divides a problem into sub-problems of the same nature. The result of one recursion becomes the input for the next recursion, and the repetition proceeds in a self-similar fashion. The algorithm calls itself with smaller input values and obtains the result by performing simple operations on these smaller values. Computing factorials and generating the Fibonacci number series are examples of recursive algorithms.


Tail Recursion: a function is tail recursive when it contains a single recursive call and that call is the final statement to be executed. Tail recursion can be replaced by iteration.
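As an illustration (the names factTail and factLoop are ours, not from the text), the sketch below shows a tail-recursive factorial, whose single recursive call is the last statement executed, next to the loop that can replace it:

#include <iostream>

// Tail-recursive factorial: the recursive call is the final action,
// so it can be mechanically rewritten as a plain loop.
long factTail(int n, long accumulator = 1)
{
    if (n == 0) return accumulator;
    return factTail(n - 1, n * accumulator);   // nothing is left to do after this call
}

// The same computation expressed iteratively.
long factLoop(int n)
{
    long result = 1;
    for (int i = 2; i <= n; ++i) result *= i;
    return result;
}

int main()
{
    std::cout << factTail(5) << " " << factLoop(5) << std::endl;   // prints "120 120"
    return 0;
}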




Recursive functions

Many mathematical functions can be defined recursively:
  • factorial
  • Fibonacci
  • Euclid's GCD (greatest common divisor); a short sketch follows this list
  • Fourier Transform
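For instance, Euclid's GCD translates directly into a short recursive function (a minimal sketch in modern C++; the listings further down this page use an older dialect):

#include <iostream>

// Euclid's algorithm: gcd(a, b) = gcd(b, a mod b), with gcd(a, 0) = a.
int gcd(int a, int b)
{
    if (b == 0) return a;          // termination condition
    return gcd(b, a % b);          // recursive call on a smaller problem
}

int main()
{
    std::cout << gcd(48, 36) << std::endl;   // prints 12
    return 0;
}

The other entries in the list have equally direct recursive definitions; factorial and Fibonacci are worked through below.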
Many problems can be solved recursively, eg games of all types from simple ones like the Towers of Hanoi problem to complex ones like chess. In games, the recursive solutions are particularly convenient because, having solved the problem by a series of recursive calls, you want to find out how you got to the solution. By keeping track of the move chosen at any point, the program call stack does this housekeeping for you! This is explained in more detail later.


Example: Factorial


One of the simplest examples of a recursive definition is that for the factorial function:
factorial( n ) = if ( n = 0 ) then 1
                 else n * factorial( n-1 )
A natural way to calculate factorials is to write a recursive function which matches this definition:
int fact( int n )
 {
 if ( n == 0 ) return 1;
 else return n * fact( n - 1 );
 }
Note how this function calls itself to evaluate the next term. Eventually it will reach the termination condition and exit. However, before it reaches the termination condition, it will have pushed n stack frames onto the program's run-time stack.
The termination condition is obviously extremely important when dealing with recursive functions. If it is omitted, then the function will continue to call itself until the program runs out of stack space - usually with moderately unpleasant results!

Failure to include a correct termination condition in a recursive function is a recipe for disaster!
Another commonly used (and abused!) example of a recursive function is the calculation of Fibonacci numbers. Following the definition:
fib( n ) = if ( n = 0 ) then 1
           if ( n = 1 ) then 1
           else fib( n-1 ) + fib( n-2 )
one can write:
int fib( int n )
 {
 if ( (n == 0) || (n == 1) ) return 1;
 else return fib( n - 1 ) + fib( n - 2 );
 }
Short and elegant, it uses recursion to provide a neat solution - that is actually a disaster! We shall re-visit this and show why it is such a disaster later.
Data structures also may be recursively defined. One of the most important class of structure - trees - allows recursive definitions which lead to simple (and efficient) recursive functions for manipulating them.
But in order to see why trees are valuable structures, let's first examine the problem of searching.

       
Backtracking



Backtracking is a general algorithm for finding all (or some) solutions to a computational problem, one that incrementally builds candidates to the solutions and abandons each partial candidate c ("backtracks") as soon as it determines that c cannot possibly be completed to a valid solution.
The classic textbook example of the use of backtracking is the eight queens puzzle, which asks for all arrangements of eight queens on a standard chessboard so that no queen attacks any other. In the common backtracking approach, the partial candidates are arrangements of k queens in the first k rows of the board, all in different rows and columns. Any partial solution that contains two mutually attacking queens can be abandoned, since it cannot possibly be completed to a valid solution.
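Purely as an illustrative sketch of that scheme (the helper names safe and solve are ours), the following program places queens row by row and abandons a partial placement as soon as two queens attack each other:

#include <cstdlib>
#include <iostream>
#include <vector>

const int N = 8;

// cols[r] = column of the queen placed in row r (one queen per row).
bool safe(const std::vector<int>& cols, int row, int col)
{
    for (int r = 0; r < row; ++r) {
        if (cols[r] == col) return false;                     // same column
        if (std::abs(cols[r] - col) == row - r) return false; // same diagonal
    }
    return true;
}

// Counts all complete solutions reachable from the partial candidate in rows 0..row-1.
int solve(std::vector<int>& cols, int row)
{
    if (row == N) return 1;                  // a complete, valid arrangement
    int count = 0;
    for (int col = 0; col < N; ++col) {
        if (safe(cols, row, col)) {          // extend the partial candidate...
            cols[row] = col;
            count += solve(cols, row + 1);
        }                                    // ...otherwise abandon it (backtrack)
    }
    return count;
}

int main()
{
    std::vector<int> cols(N);
    std::cout << solve(cols, 0) << " solutions" << std::endl;   // prints "92 solutions"
    return 0;
}

Running it reports the well-known 92 solutions; the key point is that whole sub-trees of candidates are discarded by the single safe( ) test.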
Backtracking can be applied only for problems which admit the concept of a "partial candidate solution" and a relatively quick test of whether it can possibly be completed to a valid solution. It is useless, for example, for locating a given value in an unordered table. When it is applicable, however, backtracking is often much faster than brute force enumeration of all complete candidates, since it can eliminate a large number of candidates with a single test.
Backtracking is an important tool for solving constraint satisfaction problems, such as crosswords, verbal arithmetic, Sudoku, and many other puzzles. It is often the most convenient (if not the most efficient) technique for parsing, for the knapsack problem and other combinatorial optimization problems. It is also the basis of the so-called logic programming languages such as Icon, Planner and Prolog. Backtracking is also used in diff (difference) engines.
Backtracking depends on user-given "black box procedures" that define the problem to be solved, the nature of the partial candidates, and how they are extended into complete candidates. It is therefore a metaheuristic rather than a specific algorithm — although, unlike many other meta-heuristics, it is guaranteed to find all solutions to a finite problem in a bounded amount of time.


Array (DS)


In computer science, an array data structure or simply array is a data structure consisting of a collection of elements (values or variables), each identified by at least one index. An array is stored so that the position of each element can be computed from its index  by a mathematical formula.
For example, an array of 10 integer variables, with indices 0 through 9, may be stored as 10 words at memory addresses 2000, 2004, 2008, … 2036, so that the element with index i has the address 2000 + 4 × i.
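This address arithmetic can be observed directly in C++ (an illustrative sketch; the actual addresses printed will differ from the 2000, 2004, ... of the example, but consecutive elements are always sizeof(int) bytes apart):

#include <iostream>

int main()
{
    int a[10];   // ten integers stored contiguously

    // The address of a[i] is the base address plus i * sizeof(int),
    // which is exactly what taking &a[i] computes.
    for (int i = 0; i < 10; ++i)
        std::cout << "a[" << i << "] is at " << static_cast<const void*>(&a[i]) << std::endl;

    std::cout << "element size: " << sizeof(int) << " bytes" << std::endl;
    return 0;
}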
Arrays are analogous to the mathematical concepts of vectors, matrices, and tensors. Indeed, arrays with one or two indices are often called vectors or matrices, respectively. Arrays are often used to implement tables, especially lookup tables; the word table is sometimes used as a synonym of array.
Arrays are among the oldest and most important data structures, and are used by almost every program. They are also used to implement many other data structures, such as lists and strings. They effectively exploit the addressing logic of computers. In most modern computers and many external storage devices, the memory is a one-dimensional array of words, whose indices are their addresses. Processors, especially vector processors, are often optimized for array operations.
Arrays are useful mostly because the element indices can be computed at run time. Among other things, this feature allows a single iterative statement to process arbitrarily many elements of an array. For that reason, the elements of an array data structure are required to have the same size and should use the same data representation. The set of valid index tuples and the addresses of the elements (and hence the element addressing formula) are usually, but not always, fixed while the array is in use.
The term array is often used to mean array data type, a kind of data type provided by most high-level programming languages that consists of a collection of values or variables that can be selected by one or more indices computed at run-time. Array types are often implemented by array structures; however, in some languages they may be implemented by hash tables, linked lists, search trees, or other data structures.
The term is also used, especially in the description of algorithms, to mean associative array or "abstract array", a theoretical computer science model (an abstract data type or ADT) intended to capture the essential properties of arrays.



Monday, June 20, 2011

IMPORTANT QUESTIONS (Practical & viva)

IMPORTANT QUESTIONS
BASIC COMPUTER ENGINEERING (B.E.-205)

1. What is computer?
2. Explain input and output devices?
3. Explain what is DBMS?
4. What is computer networking?
5. What is LAN, WAN, MAN?
6. Explain topologies?
7. What is operating system?
8. What is c++ (with history)?
9. Explain various data types in c++ (with ranges)?
10. Explain keywords?
11. Difference between while loop and do-while loop?
12. Write down structure of for loop?
13. What is switch case?
14. Explain if and else?
15. What is Array?
16. What is structure?
17. What is class and objects?
18. What are the features of loops?
19. What are constructors and destructors?
20. What is function overloading and operator overloading?

IMPORTANT QUESTIONS FOR RGPV EXAMINATION

IMPORTANT QUESTIONS FOR RGPV EXAMINATION
BASIC COMPUTER ENGINEERING (B.E.-205)
UNIT-1

1. Explain the classification of computers on the basis of generation and type (analog, hybrid). Also prepare a block diagram?
2. Explain 5 input and 5 output devices?
3. Explain differences between application software and system software with example?
4. Explain various types of memory?
5. Explain address bus, data bus, expansion bus and control lines?
6. Explain following terms?
(i) Computer Ethics (ii) E-Business (iii) Bio-Informatics (iv) Computer Gaming (v) Computers in multimedia and animation
UNIT-2
1. What is operating system? Explain functions of operating system?
2. Explain various types of operating system?
3. What is file? Explain various file operations?
4. Explain generations of programming languages?
5. Differences between procedure oriented programming (POP) and object oriented programming (OOP)?
6. Write down features and merits of OOP?
7. Explain differences between compiler, interpreter and assembler?
UNIT-3
1. Explain object and classes with example?
2. Explain constructors and destructors?
3. What is function overloading? Explain the types of function overloading?
4. What is Inheritance?
5. Write short note on following
(i) Structure (ii) Array (iii) for loop and while loop
(iv) Character set, keywords, operators (v) Data types (vi) Functions (vii) Pointers

UNIT-4
1. What is DATA BASE MANAGEMENT SYSTEM (DBMS)? Explain architecture of DBMS?
2. Differences between file oriented system and data base system?
3. Write short note on following.
(i) Data dictionary (ii) DBA (iii) Data independence (iv) DDL and DML (v) Primary key
Unit-5
1. What is computer networking? Explain its advantage?
2. Differences between TCP/IP Model and OSI Model?
3. Explain various networking devices?
4. Write short note on following.
(i) WWW (ii) Network security (iii) E-commerce (iv) Topologies (v) Client-server model (vi) EDI
5. Explain function of various layers
6. Explain LAN, WAN, MAN?

Thursday, January 13, 2011

C & C++

C and C++
Dennis Ritchie developed C and it was quite popular. An interesting feature in C is the use of functions. The programmer could write a function for checking whether a number is odd or even and store it in a separate file. In the program, whenever it is needed to check for even numbers, the programmer could simply call that function instead of rewriting the whole code. Usually a set of commonly used functions would be stored in a separate file and that file can be included in the current project by simply using the #include "filename" syntax. Thus the current program will be able to access all functions available in ‘filename’. Programs written in C were more structured compared to earlier high level languages, and another feature was the ability to create your own data types like structures. For instance if you want to create an address book program, you will need to store information like name and telephone number. The name is a string of characters while the telephone number is an integer number. Using structures you can combine both into one unit. Similarly there are many more advantages of using C.
Though C seemed to be ideal, it was not effective when the programs became even more complex (or larger). One of the problems was the use of many functions (developed by various users) which led to a clash of variable names. Though C is much more efficient than BASIC, a new concept called Object Oriented Programming seemed better than C. OOP was the basis of C++ (which was initially called ‘C with classes’). C++ was developed by Bjarne Stroustrup. In object oriented programming, the programmer can solve problems by breaking them down into real-life objects (it presented the programmer with an opportunity to mirror real life). What is an object? This topic is dealt with extensively in the chapter on ‘Objects and Classes’ but a brief introduction is provided here.
Consider the category of cars. All cars have some common features (for example all cars have four wheels, an engine, some body colour, seats etc.). Are all cars the same? Of course not. A Fiat and a Ford aren’t the same but they are both called cars in general. In this example cars will form a class and Ford (or Fiat) will be an object.
For those people who know C programming, it would be useful to know the differences between C and C++. Basically C++ includes everything present in C but the use of some C features is deprecated in C++.
• C does not have classes and objects (C does not support OOP)
• Structures in C cannot have functions.
• C does not have namespaces (namespaces are used to avoid name collisions).
• The I/O functions are entirely different in C and C++ (ex: printf( ), scanf( ) etc. are part of the C language).
• You cannot overload a function in C (i.e. you cannot have two functions with the same name in C); see the sketch after this list.
• Better dynamic memory management operators are available in C++.
• C does not have reference variables (in C++ reference variables are used in functions).
• In C constants are defined as macros (in C++ we can make use of ‘const’ to declare a constant).
• Inline functions are not available in C.
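As a small illustration of two of the points above (the names area and PI are ours): in C++ two functions may share one name as long as their parameter lists differ, and const can take the place of a #define macro.

#include <iostream>

const double PI = 3.14159;          // a C++ constant; in C this would typically be a #define macro

// Two functions with the same name but different parameter lists:
// legal in C++ (overloading), not in C.
int    area(int side)      { return side * side; }
double area(double radius) { return PI * radius * radius; }

int main()
{
    std::cout << area(3) << std::endl;     // calls area(int)    -> 9
    std::cout << area(2.0) << std::endl;   // calls area(double) -> about 12.566
    return 0;
}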
Let’s recap the evolution of programming languages: initially programs were written in terms of 1s and 0s (machine language). The drawback was that the process was very tedious and highly error-prone. Assembly language was developed to write programs easily (short abbreviations were used instead of 1s and 0s). To make it even simpler for programmers, high level languages were developed (instructions were more similar to regular English). As the complexity of programs increased, these languages were found to be inadequate (because they were unstructured). C was developed but even that was not capable of dealing with complex or larger programs. This led to the development of C++.
Note: Sometimes languages are divided into low level and high level only. In such a classification, C/C++ will come under high level languages.
Why do you need to learn C++?
There are many people who ask the question, "why should I learn C++? What use is it in my field?" It is a well-known fact that computers are used in all areas today. Programming is useful wherever computers are used because it provides you the flexibility of creating a program that suits your requirements. Even research work can be simulated on your computer if you have programming knowledge. Construction and programming may appear to be miles apart but even a civil engineer could use C++ programming. Consider the case of constructing a physical structure (like a pillar) in which the civil engineer has to decide on the diameter of the rods and the number of rods to be used. There are 2 variables in this case:
1. The number of rods needed (let’s denote it as ‘n’) and
2. The diameter of each rod (let’s call it as ‘d’)
The civil engineer might have to make a decision like: "Is it cost-effective for me to have 10 rods of 5cm diameter or is it better to have 8 rods of 6cm diameter?" This is just one of the simple questions he may have in his mind. There are a few related questions: "What is the best combination of number of rods, their diameters and will that combination be able to handle the maximum stress?"
Usually equations are developed for each of the factors involved. It would be much easier if the civil engineer could simply run a program and find out what is the best combination instead of manually trying out random values and arriving at a solution. This is where programming knowledge would benefit the engineer. Any person can write a good program if he has adequate knowledge about the domain (domain refers to the area for which the software is developed. In this case it is construction). Since the civil engineer has the best domain knowledge he would be able to write a program to suit his requirements if he knew programming.
Programming is applicable to almost every field- banking (for maintaining all account details as well as the transactions), educational institutions (for maintaining a database of the students), supermarkets (used for billing), libraries (to locate and search for books), medicine, electrical engineering (programs have been developed for simulating circuits) etc.
Binary Numbering System
________________________________________
The following 2 sections on binary numbering system and memory are optional but recommended. If you’ve taken a course in electronics you probably already know about this and can use the material provided here as a refresher. Having knowledge of the binary system and computer memory will help in understanding some features in programming.
________________________________________
The heart of the computer is the microprocessor, which is also referred to as the processor. The microprocessor is the main part of the computer’s CPU (central processing unit). A processor consists of millions of electronic switches; a switch can be either in ON or OFF state. Thus there are only two distinct states possible in these devices. In our real-world calculations we make use of the decimal system (which has 10 distinct states/numbers: 0 to 9). Counting up to ten is easy for humans but would be quite difficult for computers. It is easier to create a device capable of sensing 2 states than one capable of sensing 10 states (this also helps reduce errors). Computers make use of the binary system (i.e. they store data in binary format and also perform calculations in binary format). The binary system (binary meaning two) has just two numbers: 1 and 0 (which correspond to the ON and OFF state respectively – this is analogous to a switch which can either be in ON state or in OFF state).
When we have different systems (binary, decimal etc.), there ought to be a way of converting data from one system to the other. In the decimal system the number 247 stands for 7×10^0 + 4×10^1 + 2×10^2 (add it up and the result will be 247; i.e. in each place we can have one of the 10 digits and to find the actual place value we have to multiply the digit by the corresponding power of 10). For example:
247 = (2 × 10^2) + (4 × 10^1) + (7 × 10^0) = 200 + 40 + 7
1258 = (1 × 10^3) + (2 × 10^2) + (5 × 10^1) + (8 × 10^0) = 1000 + 200 + 50 + 8
Note: In C++ and most other computer languages, * is used as the multiplication operator.
The same concept holds good for a binary number but since only two states are possible, they should be multiplied by powers of 2.
Remember: A binary digit is called a bit.
So, what is the value of 1101? Multiply each position by its corresponding power of 2 (but remember, you have to start from 2^0 and not from 2^1). The value for 1101 is 13 as illustrated in the figure below:

An alternate method to obtain the value is illustrated below (but the underlying concept is the same as above):

It is easy to obtain the values which are written above each bit (2^7 = 128, 2^6 = 64 and so on). Write these values on top and then write the binary number within the squares. To find the equivalent decimal value, add up the values above the square (if the number in the square is 1). If a number is denoted as 1101, then this stands for the lower (or last) four bits of the binary number (the upper bits are set to 0). Hence 1101 will come under the values 8, 4, 2 and 1. Now, wherever there is a 1, just add the value above it (8+4+1=13). Thus 13 is the decimal equivalent of 1101 (in binary format). To distinguish between decimal and binary we usually represent the system used (decimal or binary) by subscripting the base of the system (10 is the base for the decimal system while 2 is the base for the binary system).
Hence (13)₁₀ = (1101)₂
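The positional rule just used for 1101 translates directly into code; the sketch below (the function name binaryToDecimal is ours) walks the bit string from right to left and adds the place value wherever a bit is 1:

#include <iostream>
#include <string>

// Converts a string of '0'/'1' characters to its decimal value.
int binaryToDecimal(const std::string& bits)
{
    int value = 0;
    int power = 1;                                            // 2^0 for the rightmost bit
    for (int i = static_cast<int>(bits.size()) - 1; i >= 0; --i) {
        if (bits[i] == '1') value += power;                   // add the place value of this bit
        power *= 2;                                           // the next position is worth twice as much
    }
    return value;
}

int main()
{
    std::cout << binaryToDecimal("1101") << std::endl;   // prints 13
    return 0;
}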

Computers store information in the form of bits and 8 bits make a byte. But memory capacity is expressed as multiples of 2^10 bytes (which is equal to 1024 bytes). 1024 bytes is called a Kilobyte. You may wonder why it is 1024 and not 1000 bytes. The answer lies in the binary system: keeping uniformity with the binary system, 2^10 = 1024 and not 1000 (the idea is to maintain conformity with the binary system).
Beware: The bit in position 7 in fig 1.1 is actually the 8th bit of the number (the numbering of bits starts from 0 and not 1). The bit in the highest position is called the most significant bit. In an 8 bit number, the 7th bit position is called the most significant bit (MSB) and the 0th bit is known as the least significant bit (or LSB). This is because the MSB in this case has a value of 128 (2^7) while the LSB has a value of just 1 (2^0).
________________________________________
Computer Memory
________________________________________
We know that computers can operate only on bits (0s and 1s). Thus any data that has to be processed by the computer should be converted into 0s and 1s.
Let us suppose that we want to create a text file containing a single word “Hello”. This file has to be stored physically in the computer’s memory so that we can read the file anytime in the future. For the time being forget about the file-storage part. Let’s just concentrate on how the word “hello” is stored in the computer’s memory. Computers can only store binary information; so how will the computer know which number corresponds to which alphabet? Obviously we cannot map a single bit to a character. So instead of bits we’ll consider a byte (8 bits). Now we can represent 256 characters. To map a character to a byte we’ll need to use some coding mechanism. For this purpose the ASCII (American Standard Code for Information Interchange) is used. In this coding system, every character has an equivalent decimal value. When the computer uses ASCII, it cannot directly use the decimal value, so it will convert this into an 8-bit binary number (in other words, into a byte) and store it in memory.
The following table shows part of the ASCII code.
Character   Equivalent decimal value   Binary value
A           65                         0100 0001
B           66                         0100 0010
a           97                         0110 0001
b           98                         0110 0010
In this way each character is mapped to a numeric value. If we type the word hello, then it is converted into bytes (5 bytes- one for each character) based on the ASCII chart and is stored in memory. So ‘hello’ occupies 5 bytes or 40 bits in memory.
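This mapping is easy to verify in code (a small sketch; casting each char to int simply reveals the numeric code stored for it):

#include <iostream>
#include <string>

int main()
{
    std::string word = "hello";
    // Each character occupies one byte; printing it as an int shows its ASCII code.
    for (char c : word)
        std::cout << c << " = " << static_cast<int>(c) << std::endl;
    std::cout << word.size() << " bytes" << std::endl;   // prints "5 bytes"
    return 0;
}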
Note: It is very important to know the binary system if you want to use the bitwise operators available in C++. The concept of memory is useful while learning pointers.
A question arises, “where are the individual bits stored in memory?” Each individual bit is stored in an electronic device (the electronic device is technically called a flip-flop; which is something like a switch). A single flip-flop can store one bit. Consider the fig. below:

As mentioned earlier we deal in terms of bytes rather than bits. The figure shows a 4-byte memory (which means it can hold 32 bits – each cell can store one bit). All information is stored in memory and the computer needs some method to access this data (i.e. there should be some way of distinguishing between the different memory locations). Memory addresses serve this purpose. Each byte can be individually accessed and has a unique memory address. To access the first byte, the computer will attempt to read the byte stored at memory address 1.
If memories didn’t have addresses then the computer would not know from where it has to read data (or where it has to store data). An analogy to memory address is the postal address system used in real-life. A city will have a number of houses and each house has a unique address. Just imagine the situation if we didn’t have any postal address (we wouldn’t be able to locate any house in the city!). One difference is that a memory address can house only bits and nothing else.
Memory address representations are not as simple as shown above. In the above case we’ve considered a memory that has capacity to store just 4 bytes. Memories usually contain kilobytes or gigabytes of space. As the amount of memory available increases, so does the size of the address (the address number might be in the range of millions). Thus instead of using the decimal system for addresses, the hexadecimal numbering system is used. Hexa means 16 and the distinct numbers in this system are 0 to 9 followed by A, B, C, D, E and F where A= 10 in decimal, B= 11 in decimal and F= 15 in decimal.
Counting in hexadecimal system will be 0…9, A, B…F, 10,11,12,13…19,1A, 1B, 1C…and so on.
It is quite clear that 10 in hexadecimal does not equal 10 in decimal. Instead, (0F)₁₆ = (15)₁₀, (10)₁₆ = (16)₁₀ and (11)₁₆ = (17)₁₀
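One and the same value can be printed in decimal and hexadecimal from C++, which makes such correspondences easy to check (a small sketch; the variable name n is ours):

#include <iostream>

int main()
{
    int n = 0x1A;                              // hexadecimal literal: 1*16 + 10 = 26
    std::cout << std::dec << n << std::endl;   // prints 26 (decimal)
    std::cout << std::hex << n << std::endl;   // prints 1a (hexadecimal)
    return 0;
}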
Four bits form a ‘nibble’. Eight bits form a ‘byte’ and four bytes of memory are known as a ‘word’. There is some significance attached to a ‘word’ which we shall deal with later.
Another term related to memory is ‘register’. A register consists of a set of flip-flops (flip-flops are electronic devices that can store one bit) for storing information. An 8-bit register can store 8 bits. Every processor has a set of internal registers (i.e. these registers are present within the processor and not in an external device like a hard disk). These registers (which are limited in number depending on the processor) are used by the processor for performing its calculations and computations. They usually contain the data which the processor is currently using. The size of the register (i.e. whether the registers are 16-bit, 32-bit or 64-bit registers) also depends on the processor. The common computers today use 32-bit registers (or 4-byte registers). The size of a register determines the ‘word-size’ of a computer. A computer is more comfortable (and prefers) working with data that is of the word-size (if the word-size is 4 bytes then the computer will be efficient in computations involving blocks of 4 byte data). You’ll understand this when we get into data types in the subsequent chapters.
The different types of memory are:
1. Secondary storage (for example the hard disk, floppy disks, magnetic disks, CD-ROM) where one can store information for long periods of time (i.e. data is retained in memory irrespective of whether the system is running or not).
2. The RAM (random access memory) is used by the computer to store data needed by programs that are currently running. RAM is used for the main (primary) memory of the computer (all programs which are executed need to be present in the main memory). The RAM will lose whatever is stored in memory once the computer is switched off.
3. ROM (read only memory): This contains instructions for booting up the system and performing other start-up operations. We cannot write to this memory.
4. The internal registers within the processor- these are used by the computer for performing its internal operations. The compiler will decide what has to be stored in which register when it converts our high-level code into low-level language. As such we won’t be able to use these registers in our C++ code (unless we write assembly code).
Remember: Secondary storage (also called auxiliary memory) is not directly accessible by the CPU. But RAM is directly accessible (thus secondary memory is much slower than the primary memory). For a program to execute it needs to be present in the main memory.
Which memory has lowest access time (or which memory can the CPU access quickly)?
The internal registers can be accessed quickly and the secondary storage devices take much longer to access. Computers also make use of a cache-memory. Cache memory is a high-speed memory which stores recently accessed data (cache access is faster than main memory access).

Related concept: In general cache means storing frequently accessed data temporarily in a place where it can be accessed quickly. Web browsers tend to cache web pages you visit frequently on the hard disk. Generally when we type in a website address, the browser needs to query the website and request the page, which is a time consuming process. When the browser displays this webpage, it internally caches (stores a copy of) this webpage on our hard disk also. The next time we type the same website address, the browser will directly read it from the hard disk rather than query the website (reading from the hard disk is faster than accessing a web server).
Remember: Computers store information using the binary system, addresses are represented in the hexadecimal system and in real-life we use the decimal system.
A 3-bit number can be used to form 8 different combinations (including the 000 combination).
Binary   Decimal equivalent
000      0
001      1
010      2
011      3
100      4
101      5
110      6
111      7
If 3 bits are used then the maximum possible decimal number that can be represented is 7 (not 8, because the first number is 0). Similarly, an 8-bit number can be used to represent up to 2^8 = 256 different values (0 to 255 in the decimal system).
A binary number can be either signed or unsigned. When a number is unsigned (we don’t bother about the sign), it means that the number is always positive. If a number is signed, then it could be positive or negative. +33 is a signed number (with a positive sign). In real life, we can use + or – to indicate whether a number is positive or negative but in computers only 1s and 0s can be used. Every decimal number has a binary equivalent. The binary equivalent for the decimal number 8 is 00001000. If this is a signed number then +8 would be written as 00001000 and –8 would be denoted by 10001000. Notice the difference between the two. The 7th bit (or the most significant bit) is set to 1 to indicate that the number is negative.
Assume the number 127.
For +127 you will write: 01111111
For –127 you will write: 11111111
For 255 you will write: ?
Well, the value for 255 cannot be written using a signed 8-bit number (because the 7th bit is reserved for the sign). If the unsigned representation was used then 255 can be represented as 11111111 (in this case the MSB signifies a value and is not concerned about the sign of the number).
What is the point to note here? By using a signed representation the maximum value that can be represented is reduced. An 8 bit unsigned binary number can be used to represent values from 0 to 255 (11111111 will mean 255). On the other hand, if the 8 bit binary number is a signed number then it can represent from –127 to +127 only (again a total of 255 values but the maximum value that can be represented is only 127).
Beware: Signed representation in binary format will be explained in detail later. Computers store negative numbers in 2s complement rather than storing them directly as shown above.
Remember: In signed numbers the MSB (in the binary representation) is used to indicate the sign of the number.
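The Beware note above says that machines actually use 2s complement; the following sketch (the variable names u and s are ours) prints the stored bit patterns so the difference from the sign-magnitude picture (10001000 for -8) can be seen:

#include <bitset>
#include <iostream>

int main()
{
    unsigned char u = 255;                          // unsigned: all 8 bits carry value
    signed char   s = -8;                           // signed: stored in 2s complement

    std::cout << std::bitset<8>(u) << std::endl;    // prints 11111111
    std::cout << std::bitset<8>(static_cast<unsigned char>(s)) << std::endl;
                                                    // prints 11111000 (2s complement), not 10001000
    return 0;
}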

Your very first C++ Program
________________________________________
All the example programs in this book have been tested in Turbo C++ compiler (the programs were also tested in Visual C++ 6.0). They should run on other C++ compilers as well.
Let us start with a basic C++ program. This will give you an idea about the general structure of a C++ program. Let’s write a program in which the user can enter a character and the program will display the character that was typed:
// My first C++ program
# include <iostream.h>
int main( )
{
char letter;
cout << "Enter any letter";
cin >> letter;
cout << "The letter you entered is : " << letter;
return 0;
}
• # include <iostream.h> : When the compiler encounters #include <iostream.h>, it will physically include the iostream header file in your source code (i.e. the entire file called iostream.h will be pasted in your source code). This is one of the standard header files which you’ll use in almost all of your C++ programs.

• int main( ) : Every C++ program has to have one ‘main’ function. Functions will be explained later. Remember that the compiler will execute whatever comes within the ‘main’ function. Hence the instructions that have to be executed should be written within the ‘main’ function. Just as the name implies, ‘main’ is the main part of the C++ program and this is the entry point for a C++ program.
• { : Functions are defined within a block of code. Everything within the opening and closing braces is considered a block of code. ‘main’ is a function and hence we define this function within braces. This is as good as telling the compiler, “The ‘main’ function starts here”.
• char letter; : ‘letter’ is the name of a variable. Instead of the name ‘letter’ we could also use any other name. ‘char’ defines ‘letter’ as a character variable. Therefore, the variable ‘letter’ will accept the value of only one character.
• cout << "Enter any letter"; : This is followed by the << operator (known as the insertion operator). Following this operator, anything typed within double quotes will be displayed on the screen. Remember that cout and << go together. ‘cout’ is known to the compiler already (since it is defined in the iostream header file which has been included). So when the compiler comes across cout <<, it knows what to do.
• cin >> letter; : cin is the opposite of cout. cin >> is used to obtain the value for a variable from the user (>> is known as the extraction operator). Input is usually obtained from the keyboard.
• return 0; : This statement tells the compiler that the ‘main’ function returns zero to the operating system (return values will be discussed in the chapter on functions).
• }: The closing brace is used to tell the compiler that ‘this is the end of the function’. In our case, it is the end of the ‘main’ function and this indicates the end of the program as well.
You might have noticed in the above program that every statement is terminated with a semi-colon (;). That is exactly the use of the semi-colon. It is used to tell the compiler that one instruction line is over. If you don’t put semi-colons in your program, you will get errors while compiling.
A variable is used for temporary storage of some data in a program. In the above example, ‘letter’ was a variable. Every variable belongs to a particular type (referred to as data type). The different data types in C++ are discussed later. In the above program:
char letter;
declares ‘letter’ as a character variable (a character is one of the basic data types in C++). When the compiler encounters this statement, it will allocate some memory space for this variable (i.e. any value which is assigned to the variable ‘letter’ will be stored in this allocated memory location). When the required memory space has been allocated we say that the variable has been defined (in this case a single statement will declare and define the variable ‘letter’).
________________________________________
Some points to remember:
When we want to display something on the screen we will code it as:
cout << whatever-we-want-to-display;
and when we want to obtain a value from the user and store it in a variable, we code:
cin >> variable-name;
Beware of the direction of the >> and << operators. In the statement
cin >> variable;
information (the value of the variable) flows from ‘cin’ (which would be the keyboard) into the variable (i.e. the value is stored in the variable). Information will flow in the direction of the arrows (<< or >>). ‘cout’ is linked to the standard display device (i.e. the monitor) while ‘cin’ is linked to the standard input device (i.e. the keyboard). Hence, you can’t use
cout>>variable;
This will cause an error while compiling. ‘cout’ and ‘cin’ are already pre-defined and so you can use them directly in your programs. ‘iostream.h’ is a header file that is used to perform basic input and output operations (or general I/O like using ‘cin’ and ‘cout’).
Remember: C++ is a case-sensitive language, which means that the compiler will not consider ‘letter’, ‘LETTER’ and ‘Letter’ as the same.
How to run your first program in the compiler?
Saving and compiling the program:
There are many C++ compilers available in the market. Some of them are freeware (meaning that they are free to use) while others have to be paid for. Turbo C++ compiler might be the simplest to use (but it is not freeware). Simply choose "New File" and type out the program coding. Suppose you are using some other compiler, click on File and choose "New". Some compilers may have the option of creating a C++ source file while other compilers may require a new project to be created. Whatever the method you will ultimately come to the step of creating a C++ source file. After typing the code in the compiler, save the file by giving it some name. The "Save As" option will appear under the "File" menu. Give a name (for example: first). Now the file is saved as first.cpp. All C++ source files are saved in *.cpp format. Just like *.doc represents a Word document file, *.cpp denotes a C++ (C Plus Plus) source file.
In the compiler program there will be an option called ‘Compile’ in the menu bar. Select the compile option and the compiler will do its work. It will compile the program (or in other words, it will read whatever has been typed) and in case there are any errors, the compiler will point out the line where the error was detected. Check whether the program has been typed exactly as given earlier. Even if a semi-colon is missing, it will lead to errors. If the compiler says no errors (or if the message "compiled successfully" appears), then you can go to the next stage: building the *.exe file.
*.exe file extension stands for executable files. A *.cpp file cannot be run directly on the computer. This has to be converted into a *.exe file and to do so select the "Make or Build exe" option in the compiler. The file ‘first.exe’ will be created. Now the program can be executed from the DOS prompt by typing ‘first’. But instead of running the program every time from DOS, it will be convenient to run the program from the compiler itself and check the output. In the compiler there will be another option called "Run". Just click on this and the program will run from the compiler itself (you needn’t switch back and forth between DOS and the compiler screen).
The figure should give you a rough idea as to how the executable file is created from the C++ source code.


The source code is the program that you type. Source code is converted into object code by the compiler. The linker will link your object code with other object codes. This process of creating a C++ program is discussed in the last chapter.
Modification that might be needed in first.cpp (depending on your compiler):
A little problem might be encountered while running your program from the compiler (this problem will exist in Turbo C++ compiler). While running the program from the DOS prompt, this problem will not occur.
What’s the problem? When first.cpp is executed from the compiler the program will ask the user to enter a character. Once a character has been entered, the program will return to the compiler screen. You won't see your output! It might appear as if there is some problem with the program. What happens is that the program displays the character on the screen, immediately terminates the program and returns to the compiler screen. This happens so fast that you can’t see the output being displayed.
Modify your program as shown below (if you are using Turbo C++):
// Your first program modified: first.cpp
# include <iostream.h>
int main( )
{
char letter;
cout << "Enter any letter";
cin >> letter;
cout << "The letter you entered is : " << letter;
cin >> letter;
return 0;
}
We have added the extra statement cin >> letter;
at the end of the program just before return 0;
The program flow will be the same as described earlier.
The latest compilers, like VC++ (Microsoft Visual C++ compiler) do not have any of the above problems even if the program is run from the compiler. VC++ will always ask the user to press a character to terminate the program.
Another alternative is to use a function called ‘system’, which is defined in the header file stdlib.h.
• system("PAUSE");
can be used to pause the program (i.e. execution of the program will continue only when the user presses a key).
• system("CLS");
can be used to clear the display screen. To use these two functions you have to add #include <stdlib.h> to your source code. The system( ) function actually executes a DOS command. (Try giving the commands ‘cls’ and ‘pause’ in your DOS prompt).

Saturday, August 14, 2010

C++

                                                                     C++
C++ is an "object oriented" programming language created by Bjarne Stroustrup and released in 1985. It implements "data abstraction" using a concept called "classes", along with other features to allow object-oriented programming. Parts of a C++ program are easily reusable and extensible; existing code can be extended without actually having to change it. C++ adds a concept called "operator overloading" not seen in the earlier OOP languages and it makes the creation of libraries much cleaner.
C++ maintains aspects of the C programming language, yet has features which simplify memory management. Additionally, some of the features of C++ allow low-level access to memory but also contain high level features.
C++ could be considered a superset of C; most C programs will run under C++ compilers. C uses structured programming concepts and techniques, while C++ uses object oriented programming and classes, which focus on data.