Decimal to Binary Using a Stack

There is no need to program a computer to convert from decimal to binary because all programming languages do this by default; they are binary computers, after all. That is, if we want to store the decimal value 42 in computer memory, we simply assign the literal constant 42 to some variable or constant. But the value that is actually assigned is the binary value 00101010. Moreover, even if we were to assign the hexadecimal value 0x2A or the octal value 052, the value would still be automatically converted to the binary value 00101010. These are all different representations of the decimal value forty-two, but 00101010 is the only way that value can be physically stored in computer memory.
Of course, we may choose to store the value as a string, "42", but it is no longer a numeric value at that point; it is a character array. However, most languages provide some library function that can convert the string representation of a numeric value into its actual numeric representation. In C, for instance, we would simply use the atoi() function. Nevertheless, it is not something we need to specifically cater for; the function does all the hard work for us, converting the string "42" into the equivalent binary value 00101010.
However, although decimal integer values are always stored in native binary code, presenting that binary value to the user isn't as straightforward. All languages will automatically convert the binary value back to decimal for display purposes, but what's actually happening behind the scenes is that the binary value 00101010 is converted into the string "42", which is then printed. In other words, it is the reverse of the atoi() function, essentially the equivalent of the itoa() function.
Given that we don't need to program either of these conversions, it's easy to forget that they are taking place. Since humans work predominantly in decimal, it is only natural that these conversions be done automatically for us, but this is a high-level concept; it is an abstraction. While there's nothing wrong with thinking at a high level, it's important to be aware of the low-level operations that are actually taking place, because the more we know about what's really going on behind the scenes, the more easily we can exploit it. Converting from decimal to binary is one such example: the conversion to binary is already done for us behind the scenes, so all we really need to do is present the result to the user.
To achieve this we need to convert the binary value to a string, in much the same way as the itoa() function converts a binary value into a decimal string, except we convert to a binary string. A stack makes this incredibly simple. We simply examine the low-order bit and push a '1' character onto the stack if the bit is set, otherwise we push a '0'. We then shift all the bits one bit to the right and repeat the process. We continue in this manner until the binary value is zero. If the number of elements on the stack is not an exact multiple of 8, we can (optionally) push additional '0' characters onto the stack for padding. Finally, we print the top-most element on the stack and then pop it off the stack, repeating until the stack is empty.
Thus if we use the value 00101010 once more, we examine the low-order bit. It is not set so we push a '0' onto the (empty) stack. We then shift the bits one place to the right, giving us 00010101. The low-order bit is now set so we push a '1'. We repeat the process, pushing a '0', then a '1', another '0' and another '1'. At this stage, the binary value is 00000000 so we are done. The stack has only six elements so we (optionally) push another two '0' characters onto the stack for padding. Now we begin popping the stack, printing the top-most element before each pop. When we are done, we will have output the character sequence "00101010".
The following code in C++ shows how this might be implemented as a function. The return value is a string containing the binary representation of the function argument, dec.
#include <stack>
#include <string>

std::string dec2bin (unsigned dec) {
    std::stack<char> s {}; // initialise an empty stack of characters
    while (dec) {
        if (dec & 0x1) // if the low-order bit is set...
            s.push ('1');
        else
            s.push ('0');
        dec >>= 1; // shift right by one bit
    }
    if (s.empty()) s.push ('0'); // special case: the value 0 still needs one digit
    while (s.size() % 8) s.push ('0'); // pad the stack to the nearest 8-bit boundary
    std::string bin {}; // initialise return value (empty string)
    while (!s.empty()) {
        bin.push_back (s.top()); // append the top-most character
        s.pop();
    }
    return bin;
}
Example usage:
#include <iostream>

int main() {
    std::cout << "Enter an integer value: ";
    int i;
    std::cin >> i;
    std::cout << "The decimal value " << i << " in binary is " << dec2bin (i) << std::endl;
}
Factorials Using a Stack
The question regarding factorials is easier to answer because the factorial algorithm is a naturally recursive algorithm, and whenever we recurse through a function we automatically use the call stack. The call stack is a bit more complex than an ordinary stack but it is a stack nonetheless.
The factorial of any positive value n (denoted n!) is the product of all integers in the closed range [1:n]. Thus factorial 6 (denoted 6!) is 1*2*3*4*5*6=720. We can easily implement this function using a recursive function as follows:
unsigned fact (unsigned n) {
    if (n < 2) return 1;
    return n * fact (n-1);
}
Note that all recursive functions must have an end condition. Given that 0! and 1! are both 1, any positive value less than 2 represents the required end condition, where we simply return the value 1. Otherwise we return the product of the value and the factorial of the value minus 1.
The only problem with this is that the return value would normally be either 32-bit or 64-bit. A 32-bit unsigned integer is only capable of storing values in the closed range [0:4,294,967,295] so 12! is the largest factorial it can handle because 13! is 6,227,020,800. With 64-bit unsigned integers the range increases to [0:18,446,744,073,709,551,615] but 20! is the largest factorial you can handle because 21! is 51,090,942,171,709,440,000.
Using floating-point types such as long double will give more headroom to play with but, as you can hopefully appreciate, even quite small numbers have extremely large factorials, so the risk of overflow cannot be avoided. Even the Microsoft Windows Scientific calculator can only handle factorials up to 3248!.
There are two possible solutions to this. The first is to examine the function argument and throw an exception if it exceeds the abilities of the function:
#include <stdexcept> // std::range_error

unsigned fact (unsigned n) {
    if (n < 2) return 1;
    if (12 < n) throw std::range_error ("fact: result exceeds 32 bits");
    return n * fact (n-1);
}
The problem with this is that you must handle the exception every time you call the function:
#include <iostream>
#include <stdexcept>

int main() {
    try {
        std::cout << fact (13) << std::endl;
    }
    catch (const std::range_error& e) {
        std::cerr << "out of range" << std::endl;
    }
}
The alternative is to create a user-defined type that is capable of exceeding the bounds imposed by the architecture, typically by using a string or digit-array representation of the value. That's beyond the scope of this answer; however, you can find open-source libraries for most languages that provide this functionality if you require it. It's not a common requirement, hence most languages do not include it as standard, but it makes no sense to implement your own "big integer" library when there are so many free libraries available.