A table of all the primitive types in the language is shown below.
Type | Description |
---|---|
int | Holds an integer. Size matches the word size of the target processor; no minimum size. |
uint | Unsigned integer. Same size as int. |
float | IEEE 32-bit floating point number (same as C++). |
double | IEEE 64-bit floating point number (same as C++). |
ldouble | IEEE 80-bit floating point number (same as C++ long double). |
bool | Boolean value. Can hold the values true and false. No implicit conversions to or from integers. |
char | Character. No implicit conversions to or from integers. |
thing | Variables of type thing can hold a value of any type. |
There are fewer implicit conversions between types than in C++. This is meant to improve static checking and prevent errors. Programmers who prefer C++'s implicit conversions can define their own casts between primitive types, as described later. The only implicit subtyping among primitive types is that int is a subtype of double, double is a subtype of ldouble, and float is a subtype of double.
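As a sketch of what these rules allow (assuming ordinary variables are declared with the same type name := value; form used for type aliases below, and using // purely as annotation rather than confirmed C* comment syntax):
int n := 3;
double d := n;    // allowed: int is a subtype of double
ldouble e := d;   // allowed: double is a subtype of ldouble
float f := n;     // rejected: there is no implicit conversion from int to float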
Type thing
Type thing is a special type that can hold a value of any type. This also means that the type of the value it holds is not known statically, so to perform operations on a thing (such as adding it to a number) you must first use run-time methods (such as casts, described later) to convert it to a value of the type you need.
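For example (again a sketch using the assumed declaration syntax from above, with // only as annotation):
thing a := 42;     // a thing currently holding an int
thing b := 3.14;   // a thing currently holding a double
int n := a;        // rejected: a's static type is thing, not int; a run-time cast (described later) is needed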
Type aliases
Type aliases are created by declaring variables of type "type". For example:
type size := int;
This is like a C typedef, but with less confusing syntax. It works like a substitution: "size" is replaced with "int" throughout its scope.
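For instance, after the declaration above, size can be used anywhere int can (a sketch, assuming ordinary variables are declared the same way):
type size := int;    // the alias from above
size count := 10;    // identical to: int count := 10;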
Integer size
As the table above mentions, there is no minimum integer size; a compiler is allowed to make an integer as small as 8 bits, or even smaller, if that is the efficient representation for the target platform. The reason is that integer enumerations (subrange types) are a planned future feature, which will let programmers explicitly specify the domain of a type when they want a type of fixed size. When they want efficiency, the int representation can be used together with out-of-range checking. The point is to get programmers thinking about what kind of integer they need and to specify it in a high-level way, leaving the compiler to worry about low-level details such as how many bits are in a byte.
Until integer enumerations are implemented, pretending ints are 32 bits is probably fine for the quick-and-dirty programs that are the only ones currently suited to writing in C*, and is probably not much worse than the way programmers normally handle overflow anyway.