Where is the data for an object and how is the lifetime of the object controlled? C++ takes the approach that control of efficiency is the most important issue, so it gives the programmer a choice. For maximum runtime speed, the storage and lifetime can be determined while the program is being written, by placing the objects on the stack (these are sometimes called automatic or scoped variables) or in the static storage area. This places a priority on the speed of storage allocation and release, and this control can be very valuable in some situations. However, you sacrifice flexibility because you must know the exact quantity, lifetime, and type of objects while you're writing the program. If you are trying to solve a more general problem such as computer-aided design, warehouse management, or air-traffic control, this is too restrictive.
The second approach is to create objects dynamically in a pool of memory called the heap. In this approach, you don't know until run time how many objects you need, what their lifetime is, or what their exact type is. Those are determined at the spur of the moment while the program is running. If you need a new object, you simply make it on the heap at the point that you need it. Because the storage is managed dynamically, at run time, the amount of time required to allocate storage on the heap can be noticeably longer than the time to create storage on the stack. Creating storage on the stack is often a single assembly instruction to move the stack pointer down and another to move it back up. The time to create heap storage depends on the design of the storage mechanism.
The dynamic approach makes the generally logical assumption that objects tend to be complicated, so the extra overhead of finding storage and releasing that storage will not have an important impact on the creation of an object. In addition, the greater flexibility is essential to solve the general programming problem.
Java uses dynamic memory allocation exclusively (primitive types, which you'll learn about later, are a special case). Every time you want to create an object, you use the new operator to build a dynamic instance of that object.
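A minimal sketch of what that looks like (the Account class here is invented purely for illustration): the reference a is a local variable, but the object it points to always lives on the heap.

// Hypothetical example class; any class works the same way.
class Account {
  private double balance;
  Account(double balance) { this.balance = balance; }
  double balance() { return balance; }
}

public class NewDemo {
  public static void main(String[] args) {
    // new allocates the Account on the heap at run time.
    Account a = new Account(100.0);
    System.out.println(a.balance());
  }
}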
There's another issue, however, and that's the lifetime of an object. With languages that allow objects to be created on the stack, the compiler determines how long the object lasts and can automatically destroy it. However, if you create it on the heap the compiler has no knowledge of its lifetime. In a language like C++, you must determine programmatically when to destroy the object, which can lead to memory leaks if you don't do it correctly (and this is a common problem in C++ programs). Java provides a feature called a garbage collector that automatically discovers when an object is no longer in use and destroys it. A garbage collector is much more convenient because it reduces the number of issues that you must track and the code you must write. More importantly, the garbage collector provides a much higher level of insurance against the insidious problem of memory leaks, which has brought many a C++ project to its knees.
With Java, the garbage collector is designed to take care of the problem of releasing the memory (although this doesn't include other aspects of cleaning up an object). The garbage collector "knows" when an object is no longer in use, and it then automatically releases the memory for that object. This, combined with the fact that all objects inherit from the single root class Object and that you can create objects in only one way (on the heap), makes the process of programming in Java much simpler than programming in C++. You have far fewer decisions to make and hurdles to overcome.
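A small sketch of the collector at work: the loop below creates objects and immediately abandons them, and the garbage collector reclaims them on a schedule of its own choosing. Note that System.gc() is only a suggestion to the JVM, never a command.

public class GcDemo {
  public static void main(String[] args) {
    for (int i = 0; i < 1_000_000; i++) {
      // Each StringBuilder becomes unreachable at the end of the
      // iteration; no delete is ever written.
      StringBuilder sb = new StringBuilder("object " + i);
    }
    System.gc(); // hint that now would be a good time to collect
    System.out.println("Unreferenced objects are eligible for reclamation");
  }
}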
Exception handling: dealing with errors
Ever since the beginning of programming languages, error handling has been a particularly difficult issue. Because it's so hard to design a good error-handling scheme, many languages simply ignore the issue, passing the problem on to library designers who come up with halfway measures that work in many situations but that can easily be circumvented, generally by just ignoring them. A major problem with most error-handling schemes is that they rely on programmer vigilance in following an agreed-upon convention that is not enforced by the language. If the programmer is not vigilant (often the case when in a hurry), these schemes are easily forgotten.
Exception handling wires error handling directly into the programming language and sometimes even the operating system. An exception is an object that is "thrown" from the site of the error and can be "caught" by an appropriate exception handler designed to handle that particular type of error. It's as if exception handling is a different, parallel path of execution that can be taken when things go wrong. And because it uses a separate execution path, it doesn't need to interfere with your normally executing code. This tends to make that code simpler to write because you aren't constantly forced to check for errors. In addition, a thrown exception is unlike an error value that's returned from a method or a flag that's set by a method in order to indicate an error condition; these can be ignored. An exception cannot be ignored, so it's guaranteed to be dealt with at some point. Finally, exceptions provide a way to reliably recover from a bad situation. Instead of just exiting the program, you are often able to set things right and restore execution, which produces much more robust programs.
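A minimal sketch of that parallel path (parsePercent and its range rule are invented for this example): the error leaves the method as an object, a handler is chosen by matching the exception's type, and normal execution then resumes.

public class ThrowCatchDemo {
  static int parsePercent(String s) {
    int n = Integer.parseInt(s); // may itself throw NumberFormatException
    if (n < 0 || n > 100)
      throw new IllegalArgumentException("out of range: " + n);
    return n;
  }

  public static void main(String[] args) {
    try {
      System.out.println(parsePercent("150"));
    } catch (IllegalArgumentException e) { // handler matched by type
      System.out.println("Recovered: " + e.getMessage());
    }
    System.out.println("Execution continues"); // set things right and go on
  }
}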
Java's exception handling stands out among programming languages because in Java, exception handling was wired in from the beginning and you're forced to use it. It is the single acceptable way to report errors. If you don't write your code to properly handle exceptions, you'll get a compile-time error message. This guaranteed consistency can sometimes make error handling much easier.
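For instance, the constructor of the standard library's FileReader declares a checked exception (a subclass of IOException), so a call that neither catches it nor declares it is rejected at compile time. A minimal sketch, with an arbitrary file name:

import java.io.FileReader;
import java.io.IOException;

public class CheckedDemo {
  public static void main(String[] args) {
    try { // removing this try/catch produces a compile-time error
      FileReader in = new FileReader("data.txt");
      in.close();
    } catch (IOException e) {
      System.out.println("Could not open file: " + e.getMessage());
    }
  }
}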
It's worth noting that exception handling isn't an object-oriented feature, although in object-oriented languages the exception is normally represented by an object. Exception handling existed before object-oriented languages.
Concurrent programming
A fundamental concept in computer programming is the idea of handling more than one task at a time. Many programming problems require that the program stop what it's doing, deal with some other problem, and then return to the main process. The solution has been approached in many ways. Initially, programmers with low-level knowledge of the machine wrote interrupt service routines, and the suspension of the main process was initiated through a hardware interrupt. Although this worked well, it was difficult and non-portable, so it made moving a program to a new type of machine slow and expensive.
Sometimes, interrupts are necessary for handling time-critical tasks, but there's a large class of problems in which you're simply trying to partition the problem into separately running pieces (tasks) so that the whole program can be more responsive. Within a program, these separately running pieces are called threads, and the general concept is called concurrency. A common example of concurrency is the user interface. By using tasks, a user can press a button and get a quick response rather than being forced to wait until the program finishes its current task.
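A sketch of that idea with the user interface stripped away: the long-running work happens on its own thread, so the main thread stays free to respond immediately. The two-second sleep merely stands in for a slow computation.

public class ResponsiveDemo {
  public static void main(String[] args) throws InterruptedException {
    Thread worker = new Thread(() -> {
      try {
        Thread.sleep(2000); // stand-in for a long task
        System.out.println("Long task finished");
      } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
      }
    });
    worker.start();
    System.out.println("Still responsive while the task runs");
    worker.join();
  }
}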
Ordinarily, tasks are just a way to allocate the time of a single processor. But if the operating system supports multiple processors, each task can be assigned to a different processor, and they can truly run in parallel. One of the convenient features of concurrency at the language level is that the programmer doesn't need to worry about whether there are many processors or just one. The program is logically divided into tasks, and if the machine has more than one processor, then the program runs faster, without any special adjustments.
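One way this looks in Java, using the standard java.util.concurrent library (the choice of eight tasks is arbitrary): the program is expressed as tasks handed to a pool sized to the available processors, and the task code itself never mentions processor counts.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolDemo {
  public static void main(String[] args) {
    int cpus = Runtime.getRuntime().availableProcessors();
    ExecutorService pool = Executors.newFixedThreadPool(cpus);
    for (int i = 0; i < 8; i++) {
      final int id = i;
      pool.submit(() ->
          System.out.println("task " + id + " on " +
              Thread.currentThread().getName()));
    }
    pool.shutdown(); // finish queued tasks, then stop the pool
  }
}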
All this makes concurrency sound pretty simple. There is a catch: shared resources. If you have more than one task running that's expecting to access the same resource, you have a problem.
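A minimal sketch of that problem and its classic cure: two threads increment the same counter, and because count++ is really a read-modify-write sequence, unsynchronized updates can be lost. Marking the method synchronized admits one thread at a time, so the final count is reliable.

public class SharedCounter {
  private int count = 0;
  synchronized void increment() { count++; } // one thread at a time
  int count() { return count; }

  public static void main(String[] args) throws InterruptedException {
    SharedCounter c = new SharedCounter();
    Runnable r = () -> { for (int i = 0; i < 100_000; i++) c.increment(); };
    Thread t1 = new Thread(r), t2 = new Thread(r);
    t1.start(); t2.start();
    t1.join(); t2.join();
    System.out.println(c.count()); // 200000, guaranteed by synchronized
  }
}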