We meet yet again through my blog, and this time I'm gonna talk about my eighth assignment from Mr. Tri Djoko Wahjono, Ir, M.Sc. It is about "Statement-Level Control Structures", the eighth chapter of Sebesta's Programming Language Concepts book. I answered 5 questions from the review questions and 5 questions from the problem sets, so here are my answers:
Review Questions :
1. What is the definition of control structure?
A control structure is a control statement and the collection of statements whose execution it controls.
2. What did Böhm and Jacopini prove about flowcharts?
They proved that all algorithms that can be expressed by flowcharts can be coded in a programming language with only two control statements: one for choosing between two control flow paths and one for logically controlled iteration.
3. What is the definition of block?
A block is a sequence of code delimited by either braces or the do and end reserved words.
4. What is/are the design issue(s) for all selection and iteration control statements?
Selection:
Two-way:
What is the form and type of the expression that controls the selection?
How are the then and else clauses specified?
How should the meaning of nested selectors be specified?
Multiple-selection:
On what type is the selector based?
Iteration:
How is the iteration controlled?
Where should the control mechanism appear in the loop statement?
5. What are the design issues for selection structures?
What is the form and type of the expression that controls the selection?
How are the then and else clauses specified?
How should the meaning of nested selectors be specified?
Problem Sets :
1. Describe three situations where a combined counting and logical looping statement is needed.
Three situations in which a combined counting and logical loop is needed (a small sketch of one follows this list):
A list of values is to be added to a SUM, but the loop is to be exited if SUM exceeds some prescribed value.
A list of values is to be read into an array, where the reading is to terminate when either a prescribed number of values have been read or some special value is found in the list.
The values stored in a linked list are to be moved to an array, where values are to be moved until the end of the linked list is found or the array is filled, whichever comes first.
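As a rough C++ sketch of the second situation (the array size of 100 and the sentinel value -1 are assumptions for the example, not taken from the book):

#include <iostream>

int main() {
    int values[100];
    int count = 0;
    int next;
    // Combined control: counting (count < 100) and logical (next != -1).
    while (count < 100 && std::cin >> next && next != -1) {
        values[count] = next;
        ++count;
    }
    std::cout << count << " values read\n";
    return 0;
}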
2. Study the iterator feature of CLU in Liskov et al. (1981) and determine its advantages and disadvantages.
The key addition was the concept of a cluster, CLU’s type extension system and the root of the language’s name CLUster. Clusters correspond generally to the concept of an “object” in an OO language, and have roughly the same syntax.
CLU did not offer any sort of structure for the clusters themselves. Cluster names are global, and no namespace mechanism was provided to group clusters or allow them to be created "locally" inside other clusters. This problem is not unique to CLU, but it is surprising that so many languages have lacked this feature: given how central variable scoping was in ALGOL, giving scope to cluster/object names seems like an obvious extension.
CLU does not perform implicit type conversions. In a cluster, the explicit type conversions ‘up’ and ‘down’ change between the abstract type and the representation. There is a universal type ‘any’, and a procedure force[] to check that an object is a certain type.
Another key feature of the CLU type system is iterators, which return objects from a collection one after the other. Iterators were "black boxes" that offered an identical API no matter what data they were being used with. Thus the iterator for a collection of complex_numbers would be identical to that for an array of integers.
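CLU's own syntax is not reproduced here, but the idea of an identical iteration API over different element types can be sketched with a rough C++ analogy (the collections below are just examples):

#include <complex>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> ints = {1, 2, 3};
    std::vector<std::complex<double>> cs = {{1.0, 2.0}, {3.0, 4.0}};

    // The consuming loop has the same shape for both collections;
    // only the element type changes, as with CLU iterators.
    for (int i : ints) std::cout << i << '\n';
    for (const std::complex<double>& c : cs) std::cout << c << '\n';
    return 0;
}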
CLU also includes exception handling, based on various attempts in other languages; exceptions are raised using signal and handled with except. Oddly, given the focus on type design, CLU does not offer enumerated types, nor any obvious way to create them.
A final distinctive feature of CLU is multiple assignment, where more than one variable can appear on the left-hand side of an assignment operator.
3. Compare the set of Ada control statements with those of C# and decide which are better and why.
C#'s are better, because the C# multiple-selection statement is a great boost to C#'s writability with no obvious negatives; furthermore, C# has the most flexible iteration statements.
4. What are the pros and cons of using unique closing reserved words on compound statements?
Pros: readability. When one sees an end-if or end-while in a program written by someone else, it is clear which block is ending.
Cons: writability. Using unique closing words means there are more reserved words, more reserved words means fewer words are available to the programmer, and it also means more words to learn.
5. What are the arguments, pro and con, for Python’s use of indentation to specify compound statements in control statements?
Pros of indentation:
It enforces consistent indentation, which makes the code easier to read.
It reduces clutter by replacing visible block-delimiter tokens with whitespace that serves the same purpose.
Cons of indentation:
It is harder to cut and paste code to a different nesting level, because the indentation has to be fixed by hand.
Some text editors display whitespace differently, so code that looks consistent in one editor may not in another.
Tabs and spaces cannot safely be mixed in Python, and it is easy to cause an error by putting too few spaces in an indentation level, which drops a statement back to the previous level and closes the loop/block. This decreases writability.
We meet yet again through my blog, and this time I'm gonna talk about my seventh assignment from Mr. Tri Djoko Wahjono, Ir, M.Sc. It is about "Expressions and Assignment Statements", the seventh chapter of Sebesta's Programming Language Concepts book. I answered 5 questions from the review questions and 5 questions from the problem sets, so here are my answers:
Review Questions:
1. Define operator precedence and operator associativity.
·Operator precedence defines the order and priority of evaluation of operators from different precedence levels.
·Operator associativity defines the order of evaluation of operators from the same precedence level.
2. What is a ternary operator?
·A ternary operator is an operator with three operands; the usual example is the conditional expression, which can be used where any other expression can be used.
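For instance, in C-based languages the conditional expression ?: is the usual ternary operator; a minimal C++ sketch (the variable names and values are just examples):

#include <iostream>

int main() {
    int count = 5;
    // The conditional expression can appear wherever an expression can,
    // here as the right-hand side of an assignment.
    int average = (count == 0) ? 0 : 100 / count;
    std::cout << average << '\n';   // prints 20
    return 0;
}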
3. What is a prefix operator?
·A prefix operator is an operator that precedes its operand.
4. What operator usually has right associativity?
·The exponentiation operator usually has right associativity, for example ** in Fortran and Ruby.
5. What is a non-associative operator?
·A non-associative operator is one that may not be chained: an expression with two adjacent occurrences of the operator, such as A ** B ** C in Ada, is illegal and must be parenthesized.
Problem set:
1. When might you want the compiler to ignore type differences in an expression?
·Let Type be a subrange of Integer. It may be useful for the compiler to ignore the difference between Type and Integer in an expression.
2. State your own arguments for and against allowing mixed-mode arithmetic expressions.
·For: a mixed-mode arithmetic expression is needed when calculating expressions that may have decimal results. It is useful because it allows two numeric types, such as float and integer, to be added without losing the precision of the float.
·Against: while mixed-mode expressions are convenient, they are more error prone when expressions are more likely to have non-decimal results; a mixed-mode expression might produce a decimal result even though an integer result was wanted.
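A small C++ illustration of both sides of the argument (variable names and values are just examples):

#include <iostream>

int main() {
    int whole = 7;
    double decimal = 2.5;

    // For: mixing int and double keeps the precision of the double.
    double sum = whole + decimal;    // whole is coerced to 7.0, sum is 9.5

    // Against: a mixed-mode expression can yield a fractional result
    // even when an integer result was intended.
    double ratio = whole / 2.0;      // 3.5, not the integer 3

    std::cout << sum << ' ' << ratio << '\n';
    return 0;
}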
3. Do you think the elimination of overloaded operators in your favorite language would be beneficial? Why or why not?
·No, it would not be beneficial. Operator overloading is a helpful feature when developing a complex program with complex arithmetic operations. It allows developers to create a class whose operator functions can replace countless lines of code with a single operator, which clearly helps the readability and writability of a program. Eliminating overloaded operators would nullify this advantage.
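As a minimal sketch of the idea (a toy complex-number type, not from the book), an overloaded + lets client code read like ordinary arithmetic:

#include <iostream>

struct Complex {
    double re, im;
};

// Overloaded operator: addition of two Complex values.
Complex operator+(Complex a, Complex b) {
    return Complex{a.re + b.re, a.im + b.im};
}

int main() {
    Complex x{1.0, 2.0}, y{3.0, 4.0};
    Complex z = x + y;               // reads like built-in arithmetic
    std::cout << z.re << " + " << z.im << "i\n";
    return 0;
}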
4. Would it be a good idea to eliminate all operator precedence rules and require parentheses to show the desired precedence in expressions? Why or why not?
·No, it would not be a good idea. Although requiring explicit grouping sounds like it adds flexibility, requiring parentheses to show the desired precedence would hurt the readability and writability of a program.
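For instance, compare the two forms below (a hypothetical example, just to show the extra noise that full parenthesization would add):

#include <iostream>

int main() {
    int a = 1, b = 2, c = 3, d = 4;
    int with_rules = a + b * c - d;       // relies on the usual precedence rules
    int all_parens = (a + (b * c)) - d;   // every grouping spelled out
    std::cout << with_rules << ' ' << all_parens << '\n';   // both print 3
    return 0;
}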
5. Should C's assigning operations (for example, +=) be included in other languages (that do not already have them)? Why or why not?
·Yes, because the compound form is simpler than the full <operand> = <operand> + <some value> form, and it can save much typing time.
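A tiny C++ example of the shorthand:

#include <iostream>

int main() {
    int total = 10;
    total += 5;                      // shorthand for total = total + 5
    std::cout << total << '\n';      // prints 15
    return 0;
}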
Here are my answers for the sixth chapter, "Data Types", of Sebesta's Programming Language Concepts book:
Review Questions :
1. What is a descriptor?
A descriptor is the collection of the attributes of a variable.
2. What are the advantages and disadvantages of decimal data types?
The advantage of decimal data types is being able to precisely store decimal values, at least those within a restricted range, which cannot be done with floating-point. The disadvantages of decimal data types are that the range of values is restricted because no exponents are allowed, and their representation in memory is mildly wasteful.
3. What are the design issues for character string types?
The two most important design issues that are specific to character string types are the following:
Should strings be simply a special kind of character array or a primitive type?
Should strings have static or dynamic length?
4. Describe the three string length options.
Static length strings: the length is static and set when the string is created.
Limited dynamic length strings: allow strings to have varying length up to a declared and fixed maximum set by the variable’s definition.
Dynamic length strings: allow strings to have varying length with no maximum.
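A rough C++ sketch of the three options (the sizes and types here are only examples, not from the book):

#include <string>

int main() {
    // Static length: the length is fixed when the string is created.
    char greeting[6] = "hello";

    // Limited dynamic length: varying length up to a declared maximum (here 32).
    struct LimitedString { char data[32]; int length; };
    LimitedString name{{}, 0};

    // Dynamic length: varying length with no fixed maximum.
    std::string title = "hello";
    title += ", world";              // can grow freely

    (void)greeting; (void)name;      // silence unused-variable warnings
    return 0;
}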
5. Define ordinal, enumeration, and subrange types.
An ordinal type is one in which the range of possible values can be easily associated with the set of positive integers. In Java, for example, the primitive ordinal types are integer, char, and Boolean.
Enumeration and subrange types are the two user-defined ordinal types that have been supported by programming languages.
Enumeration type is one in which all of the possible values, which are named constants, are provided, or enumerated, in the definition. Enumeration types provide a way of defining and grouping collections of named constants, which are called enumeration constants.
Subrange type is a contiguous subsequence of an ordinal type.
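A small C++ example of an enumeration type (C++ has no built-in subrange type, so Ada's version is shown only as a comment):

#include <iostream>

// Enumeration type: all possible values are named constants in the definition.
enum class Day { Mon, Tue, Wed, Thu, Fri, Sat, Sun };

int main() {
    Day today = Day::Wed;
    if (today == Day::Sat || today == Day::Sun)
        std::cout << "weekend\n";
    else
        std::cout << "weekday\n";
    // A subrange such as Ada's "subtype Weekday is Day range Mon .. Fri;"
    // has no direct C++ equivalent.
    return 0;
}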
Problem Set:
1. What are the arguments for and against representing Boolean values as single bits in memory?
Boolean variables stored as single bits are very space efficient, but on most computers access to them is slower than if they were stored as bytes.
2. How does a decimal value waste memory space?
Decimal types are stored very much like character strings, using binary codes for the decimal digits. These representations are called binary coded decimal (BCD). In some cases, they are stored one digit per byte, but in others, they are packed two digits per byte. Either way, they take more storage than binary representations. It takes at least four bits to code a decimal digit. Therefore, to store a six-digit coded decimal number requires 24 bits of memory. However, it takes only 20 bits to store the same number in binary.
3. VAX minicomputers use a format for floating-point numbers that is not the same as the IEEE standard. What is this format, and why was it chosen by the designers of the VAX computers? A reference for VAX floating-point representation is Sebesta (1991).
The VAX format was the existing DEC format inherited from the PDP-11. It was chosen because the PDP-11 had several uniquely innovative features and was easier to program than its predecessors, thanks to its additional general-purpose registers.
4. Compare the tombstone and lock-and-key methods of avoiding dangling pointers, from the point of view of safety and implementation cost.
Tombstones are costly in both time and space. Because tombstones are never deallocated, their storage is never reclaimed. Every access to a heap-dynamic variable through a tombstone requires one more level of indirection, which requires an additional machine cycle on most computers. Apparently none of the designers of the more popular languages have found the additional safety to be worth this additional cost, because no widely used language uses tombstones.
In the lock-and-key approach, pointer values are represented as ordered pairs (key, address), where the key is an integer value. Heap-dynamic variables are represented as the storage for the variable plus a header cell that stores an integer lock value. When a heap-dynamic variable is allocated, a lock value is created and placed both in the lock cell of the heap-dynamic variable and in the key cell of the pointer that is specified in the call to new. Every access to the dereferenced pointer compares the key value of the pointer to the lock value in the heap-dynamic variable. If they match, the access is legal; otherwise the access is treated as a run-time error. Any copies of the pointer value to other pointers must copy the key value. Therefore, any number of pointers can reference a given heap-dynamic variable. When a heap-dynamic variable is deallocated with dispose, its lock value is cleared to an illegal lock value. Then, if a pointer other than the one specified in the dispose is dereferenced, its address value will still be intact, but its key value will no longer match the lock, so the access will not be allowed.
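A very rough C++ sketch of the lock-and-key idea (all names are hypothetical; a real implementation lives inside the language run-time system, and this sketch deliberately never reuses a cell's storage so the check stays well defined):

#include <stdexcept>

struct Cell {
    unsigned lock;   // lock value kept in the header of the heap-dynamic variable
    int value;       // the variable's storage
};

struct KeyedPtr {
    unsigned key;    // key value carried by every pointer to the cell
    Cell* addr;
};

static unsigned next_lock = 1;   // 0 is reserved as the illegal lock value

KeyedPtr keyed_new() {           // like "new": matching lock and key are created
    unsigned lock = next_lock++;
    return KeyedPtr{lock, new Cell{lock, 0}};
}

int& deref(KeyedPtr p) {         // every access compares the key with the lock
    if (p.addr == nullptr || p.key != p.addr->lock)
        throw std::runtime_error("access through dangling pointer");
    return p.addr->value;
}

void keyed_dispose(KeyedPtr p) { // like "dispose": clear the lock to the illegal value
    if (p.addr != nullptr)
        p.addr->lock = 0;        // later dereferences of any copy now fail the check
}

int main() {
    KeyedPtr p = keyed_new();
    deref(p) = 42;               // legal: key matches lock
    keyed_dispose(p);
    // deref(p) = 7;             // would now throw: key no longer matches lock
    return 0;
}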
5. What disadvantages are there in implicit dereferencing of pointers, but only in certain contexts? For example, consider the implicit dereference of a pointer to a record in Ada when it is used to reference a record field.
When implicit dereferencing of pointers occurs only in certain contexts, it makes the language slightly less orthogonal. The context of the reference to the pointer determines its meaning. This detracts from the readability of the language and makes it slightly more difficult to learn.
We meet yet again through my blog, and this time I'm gonna talk about my fifth assignment from Mr. Tri Djoko Wahjono, Ir, M.Sc. It is about "Names, Bindings, and Scopes", the fifth chapter of Sebesta's Programming Language Concepts book. I answered 5 questions from the review questions and 5 questions from the problem sets, so here are my answers:
Review Questions :
What are the design issues for names?
There are two primary design issues for names: the first is "Are names case sensitive?" and the second is "Are the special words of the language reserved words or keywords?"
What is the potential danger of case-sensitive names?
Writability and readability (because sometimes names that look alike are actually different).
In what way are reserved words better than keywords?
Reserved words are better than keywords because the ability to redefine keywords can be confusing.
What is an alias?
When more than one variable can be used to access the same memory location, the variables are called aliases.
Which category of C++ reference variables is always aliases?
Union types.
Problem set:
Which of the following identifier forms is most readable? Support your decision.
SumOfSales
sum_of_sales
SUMOFSALES
sum_of_sales, because the underscores make it easier to read and to understand its function.
Some programming languages are typeless. What are the obvious advantages and disadvantages of having no types in a language?
Advantage:
The only advantage is that it allows coders to write sloppy programs quickly.
Disadvantages:
You are not in control of the data and variables, the compiler or interpreter is.
If you mis-assign variables, there is no way for the compiler to catch any of your mistakes. It just “does what you said”, even if it is wrong.
Supporting programs written in a typeless language is much more difficult than in a strongly typed one. It is often very difficult to determine what the original programmer wanted to do.
Write a simple assignment statement with one arithmetic operator in some language you know. For each component of the statement, list the various bindings that are required to determine the semantics when the statement is executed. For each binding, indicate the binding time used for the language
(C++)
int count;
count = count + 5;
Possible types for count: set at language design time.
Type of count: bound at compile time.
Set of possible values of count: bound at compiler design time.
Value of count: bound at execution time with this statement.
Set of possible meanings for the operator symbol "+": bound at language definition time.
Meaning of the operator symbol "+" in this statement: bound at compile time.
Internal representation of the literal "5": bound at compiler design time.
Dynamic type binding is closely related to implicit heap-dynamic variables. Explain this relationship.
Both bind attributes at assignment time: with dynamic type binding, the type of a variable is bound when it is assigned, and an implicit heap-dynamic variable is bound to its type, storage, and value when it is assigned.
Describe a situation when a history-sensitive variable in a subprogram is useful.
History sensitive variables may be useful in data manipulation subprograms, where some operation is performed on a variable, and the function exits, then the function is called again. This way, the function doesn’t have to take the variable as a parameter, but only to return it.
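In C++, a static local variable behaves this way; a minimal sketch (the ID-counter use case is just an example):

#include <iostream>

int next_id() {
    static int last = 0;              // history sensitive: keeps its value between calls
    return ++last;
}

int main() {
    std::cout << next_id() << '\n';   // 1
    std::cout << next_id() << '\n';   // 2
    std::cout << next_id() << '\n';   // 3
    return 0;
}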
We meet yet again through my blog, and this time I'm gonna talk about my fourth assignment from Mr. Tri Djoko Wahjono, Ir, M.Sc. It is about "Lexical and Syntax Analysis", the fourth chapter of Sebesta's Programming Language Concepts book. I answered 5 questions from the review questions and 5 questions from the problem sets, so here are my answers:
Review Question Answers
What are three reasons why syntax analyzers are based on grammars?
First, grammars are clear and concise; second, the grammar description can be used as a direct basis for the analyzer; and third, grammars are modular and easy to maintain.
Explain the three reasons why lexical analysis is separated from syntax analysis.
Simplicity: Techniques for lexical analysis are less complex than those required for syntax analysis.
Efficiency: it pays to optimize the lexical analyzer, because lexical analysis requires a significant portion of total compilation time.
Portability: because the lexical analyzer reads input program files and often includes buffering of that input, it is somewhat platform dependent.
Define lexeme and token.
A lexeme is a logical grouping of characters that the lexical analyzer collects.
A token is the internal code for a category of these groupings.
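For example, for the assignment statement count = count + 5; used above, a lexical analyzer might produce lexeme/token pairs like these (the token names are only illustrative):
count -> IDENT
= -> ASSIGN_OP
count -> IDENT
+ -> ADD_OP
5 -> INT_LIT
; -> SEMICOLON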
What are the primary tasks of a lexical analyzer?
The primary task of a lexical analyzer is to perform syntax analysis at the lowest level of program structure.
Describe briefly the three approaches to building a lexical analyzer.
Write a formal description of the token patterns of the language using a descriptive language related to regular expression.
Design a state transition diagram that describes the token patterns of the language and write a program that implements the diagram.
Design a state transition diagram that describes the token patterns of the language and hand-construct a table-driven implementation of the state diagram.
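A compressed C++ sketch of the second approach, hand-coding a tiny state diagram that recognizes only identifiers and integer literals (the token names and the two recognizing states are assumptions for the example):

#include <cctype>
#include <iostream>
#include <string>

enum Token { IDENT, INT_LIT, OTHER, END };

// Hand-coded state diagram: from the start state, a letter enters the
// "in identifier" state and a digit enters the "in integer" state; each
// state loops on its own character class and exits on anything else.
Token lex(const std::string& src, std::size_t& pos, std::string& lexeme) {
    lexeme.clear();
    while (pos < src.size() && std::isspace(static_cast<unsigned char>(src[pos]))) ++pos;
    if (pos >= src.size()) return END;
    if (std::isalpha(static_cast<unsigned char>(src[pos]))) {        // identifier state
        while (pos < src.size() && std::isalnum(static_cast<unsigned char>(src[pos])))
            lexeme += src[pos++];
        return IDENT;
    }
    if (std::isdigit(static_cast<unsigned char>(src[pos]))) {        // integer state
        while (pos < src.size() && std::isdigit(static_cast<unsigned char>(src[pos])))
            lexeme += src[pos++];
        return INT_LIT;
    }
    lexeme += src[pos++];                                            // any other single character
    return OTHER;
}

int main() {
    std::string source = "sum = sum + 47";
    std::size_t pos = 0;
    std::string lexeme;
    for (Token t = lex(source, pos, lexeme); t != END; t = lex(source, pos, lexeme)) {
        const char* name = (t == IDENT) ? "IDENT" : (t == INT_LIT) ? "INT_LIT" : "OTHER";
        std::cout << lexeme << " -> " << name << '\n';
    }
    return 0;
}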
Problem Set Answers
Perform the pairwise disjointness test for the following grammar rules.
A. S → aSB | bAA
FIRST(aSB) = {a}, FIRST(bAA) = {b}; the two FIRST sets are disjoint, so the rule passes the test.
B. A → b[aB] | a
FIRST(b[aB]) = {b}, FIRST(a) = {a}; the two FIRST sets are disjoint, so the rule passes the test.
C. B → aB | a
FIRST(aB) = {a}, FIRST(a) = {a}; the FIRST sets are not disjoint, so the rule fails the test.
Show a trace of the recursive descent parser given in Section 4.4.1 for the string a + b * c
a + b * c
Call lex /* returns a */
Enter <expr>
Enter <term>
Enter <factor>
Call lex /* returns + */
Exit <factor>
Exit <term>
Call lex /* returns b */
Enter <term>
Enter <factor>
Call lex /* returns * */
Exit <factor>
Call lex /* returns c */
Enter <factor>
Call lex /* returns end-of-input */
Exit <factor>
Exit <term>
Exit <expr>
Show a trace of the recursive descent parser given in Section 4.4.1 for the string a * (b + c)
a * (b + c)
Call lex /* returns a */
Enter <expr>
Enter <term>
Enter <factor>
Call lex /* returns * */
Exit <factor>
Call lex /* returns ( */
Enter <factor>
Call lex /* returns b */
Enter <expr>
Enter <term>
Enter <factor>
Call lex /* returns + */
Exit <factor>
Exit <term>
Call lex /* returns c */
Enter <term>
Enter <factor>
Call lex /* returns ) */
Exit <factor>
Exit <term>
Exit <expr>
Call lex /* returns end-of-input */
Exit <factor>
Exit <term>
Exit <expr>
Given the following grammar and the right sentential form, draw a parse tree and show the phrases and simple phrases, as well as the handle.
S → aAb | bBA
A → ab | aAB
B → aB | b
A. aaAbb
parse tree =
      S
    / | \
   a  A  b
     / | \
    a  A  B
           |
           b
Handle = b
Phrases = aaAbb, aAb, b
Simple phrase = b
B. bBab
parse tree =
      S
    / | \
   b  B  A
        / \
       a   b
Handle = ab
Phrases = bBab, ab
Simple phrase = ab
C. aaAbBb
Trying to reduce the string: aaAbBb → aSBb (reducing aAb to S) → aSBB (reducing b to B), and no further reduction to S is possible;
therefore the string cannot be derived from the given grammar.
We meet yet again through my blog, and this time I'm gonna talk about my third assignment from Mr. Tri Djoko Wahjono, Ir, M.Sc. It is about "Describing Syntax and Semantics", the third chapter of Sebesta's Programming Language Concepts book. I answered 5 questions from the review questions and 5 questions from the problem sets, so here are my answers:
Review Questions :
1. Define syntax and semantics.
Syntax is the form of expressions, statements, and program units in a programming language, while semantics is the meaning of those expressions, statements, and program units.
2. Who are language descriptions for?
Language descriptions are for initial evaluators, implementors, and users.
3. Describe the operation of a general language generator.
A language generator is a device that can be used to generate the sentences of a language. We can think of the generator as having a button that produces a sentence of the language every time it is pushed. Because the particular sentence that is produced by a generator when its button is pushed is unpredictable, a generator seems to be a device of limited usefulness as a language descriptor. For example, to determine the correct syntax of a particular statement using a compiler, the programmer can only submit a speculated version and note whether the compiler accepts it. On the other hand, it is often possible to determine whether the syntax of a particular statement is correct by comparing it with the structure of the generator.
4. Describe the operation of a general language recognizer.
Suppose we have a language L that uses an alphabet Σ of characters. To define L formally using the recognition method, we would need to construct a mechanism R, called a recognition device, capable of reading strings of characters from the alphabet Σ. R would indicate whether a given input string was or was not in L. In effect, R would either accept or reject the given string. Such devices are like filters, separating legal sentences from those that are incorrectly formed. If R, when fed any string of characters over Σ, accepts it only if it is in L, then R is a description of L. Because most useful languages are, for all practical purposes, infinite, this might seem like a lengthy and ineffective process. Recognition devices, however, are not used to enumerate all of the sentences of a language; they have a different purpose. The syntax analysis part of a compiler is a recognizer for the language the compiler translates. In this role, the recognizer need not test all possible strings of characters from some set to determine whether each is in the language. Rather, it need only determine whether given programs are in the language. In effect, then, the syntax analyzer determines whether the given programs are syntactically correct.
5. What is the difference between a sentence and a sentential form?
A sentence is a sentential form that consists only of terminal symbols (lexemes), while a sentential form is any string of grammar symbols, possibly including nonterminals, that appears in some derivation.
Problem Set :
1. The two mathematical models of language description are generation and recognition. Describe how each can define the syntax of a programming language.
Recognition: the recognizer does not need to test all possible strings of characters from some set to determine whether each is in the language; it need only determine whether given programs are in the language. In effect, the syntax analyzer determines whether the given programs are syntactically correct.
Generation: to determine the correct syntax of a particular statement using a compiler, the programmer can only submit a speculated version and note whether the compiler accepts it. With a generator, on the other hand, it is often possible to determine whether the syntax of a particular statement is correct by comparing it with the structure of the generator.
2. Write EBNF descriptions for the following:
a. A Java class definition header statement
<class_head> → {<modifier>} class <id> [extends <class_name>] [implements <interface_name> {, <interface_name>}]
<modifier> → public | abstract | final
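For instance, the rule above derives a header such as the following (the class and interface names are hypothetical): public final class Stack extends Vector implements Cloneable, Serializable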
Hello, welcome back to my blog. This is my second assignment from the same lecturer. This time I was required to answer the review questions and problem set from the second chapter of Sebesta's Programming Language Concepts book. Here are the answers for the review questions:
1. In what year was Plankalkül designed? In what year was that design published?
·It was designed in 1945, and the design was published 27 years later, in 1972.
2. What two common data structures were included in Plankalkül?
·The 2 common data structures were arrays and records.
3. How were the pseudocodes of the early 1950s implemented?
·The pseudocodes were implemented by a pure interpreter.
4. Speedcoding was invented to overcome two significant shortcomings of the computer hardware of the early 1950s. What were they?
·The shortcomings were the lack of floating point and the lack of automatic incrementing address registers.
5. Why was the slowness of interpretation of programs acceptable in the early 1950s?
·One of the primary reasons why the slowness of interpretive systems was tolerated from the late 1940s to the mid-1950s was the lack of floating-point hardware in the available computers. All floating-point operations had to be simulated in software, a very time-consuming process. Because so much processor time was spent in software floating-point processing, the overhead of interpretation and the simulation of indexing were relatively insignificant.
Here are the answers for the problem set questions:
1. What features of Plankalkül do you think would have had the greatest influence on FORTRAN 0 if the FORTRAN designers had been familiar with Plankalkül?
·The feature of Plankalkül that could have had the greatest influence on Fortran 0, had the Fortran designers been familiar with it, is the inclusion of mathematical expressions showing the current relationships between program variables, which could have helped Fortran 0 compute floating point faster.
2. Determine the capabilities of Backus's 701 Speedcoding system, and compare them with those of a contemporary programmable hand calculator.
·The Speedcoding interpreter effectively converted the 701 to a virtual three-address floating-point calculator. The system included pseudo-instructions for the four arithmetic operations on floating-point data, as well as operations such as square root, sine, arc tangent, exponent, and logarithm. Conditional and unconditional branches and input/output conversions were also part of the virtual architecture. Speedcoding included the novel facility of automatically incrementing address registers; this facility did not appear in hardware until the UNIVAC 1107 computers of 1962. Because of such features, matrix multiplication could be done in 12 Speedcoding instructions. Backus claimed that problems that could take two weeks to program in machine code could be programmed in a few hours using Speedcoding. This means that Backus's 701 Speedcoding system could solve arithmetic problems much faster than a programmable hand calculator.
3. Write a short history of the A-0, A-1, and A-2 systems designed by Grace Hopper and her associates.
·Between 1951 and 1953, a team led by Grace Hopper at UNIVAC developed a series of "compiling" systems named A-0, A-1, and A-2 that expanded a pseudocode into machine code subprograms in the same way as macros are expanded into assembly language. The pseudocode source for these "compilers" was still quite primitive, although even this was a great improvement over machine code because it made source programs much shorter. Wilkes (1952) independently suggested a similar process. Maurice V. Wilkes (also at Cambridge) extended the idea to design an assembly program that could combine chosen subroutines and allocate storage (Wilkes et al., 1951, 1957).
4. As a research project, compare the facilities of FORTRAN 0 with those of the Laning and Zierler system.
·The Laning and Zierler system (Laning and Zierler, 1954) was the first algebraic translation system to be implemented. By algebraic, we mean that it translated arithmetic expressions, used separately coded subprograms to compute transcendental functions (e.g., sine and logarithm), and included arrays. Such facilities were not yet found in Fortran 0, which did not have that kind of algebraic translation or array support. In fact, the facilities of the Laning and Zierler system were superior to those of Fortran 0, but the system never escaped MIT, which is a shame.
5. Which of the three original goals of the ALGOL design committee, in your opinion, was most difficult to achieve at that time?
·"It should be possible to use the language for the description of algorithms in printed publications" was the most difficult goal to achieve at that time, because it was something entirely new to the computer business. Publishing algorithms in such a language was hard for readers to understand, since it was a new language that they had never used before, so they first needed to learn how the language worked.