0.9+0.53

Updated: 9/27/2023
For Family (Lvl 1) · 3y ago

Best Answer

text[Limit - i + 1] := tdigit[digit[i]];   % Reversing the order.
next i;                                    % Arabic numerals put the low order last.
Print text," = ",n,"!";                    % Print the result!
next n;                                    % On to the next factorial up.
END;

With the example in view, a number of details can be discussed. The most important is the choice of the representation of the big number. In this case, only integer values are required for digits, so an array of fixed-width integers is adequate. It is convenient to have successive elements of the array represent higher powers of the base.

The second most important decision is the choice of the base of arithmetic, here ten. There are many considerations. The scratchpad variable d must be able to hold the result of a single-digit multiply plus the carry from the prior digit's multiply. In base ten, a sixteen-bit integer is certainly adequate, as it allows values up to 32767. However, this example cheats, in that the value of n is not itself limited to a single digit. This has the consequence that the method will fail for n > 3200 or so. In a more general implementation, n would also use a multi-digit representation. A second consequence of the shortcut is that after the multi-digit multiply has been completed, the last value of the carry may need to be carried into several higher-order digits, not just one.

There is also the issue of printing the result in base ten, for human consideration. Because the base is already ten, the result could be shown simply by printing the successive digits of the array digit, but they would appear with the highest-order digit last (so that 123 would appear as "321"). The whole array could be printed in reverse order, but that would present the number with leading zeroes ("00000...000123"), which may not be appreciated, so this implementation builds the representation in a space-padded text variable and then prints that. The first few results (with spacing every fifth digit and annotation added here) are:

This implementation could make more effective use of the computer's built-in arithmetic. A simple escalation would be to use base 100 (with corresponding changes to the translation process for output), or, with sufficiently wide computer variables (such as 32-bit integers), we could use larger bases, such as 10,000. Working in a power-of-2 base closer to the computer's built-in integer operations offers advantages, although conversion to a decimal base for output becomes more difficult. On typical modern computers, additions and multiplications take constant time independent of the values of the operands (so long as the operands fit in single machine words), so there are large gains in packing as much of a bignumber as possible into each element of the digit array. The computer may also offer facilities for splitting a product into a digit and carry without requiring the two operations of mod and div as in the example, and nearly all arithmetic units provide a carry flag which can be exploited in multiple-precision addition and subtraction. This sort of detail is the grist of machine-code programmers, and a suitable assembly-language bignumber routine can run much faster than the result of compiling a high-level language, which does not provide access to such facilities.
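Concretely, the technique described above can be sketched in a few lines of Python (chosen here only because it is compact; the original example is in a pseudo-language). A big number is kept as a list of base-ten digits, lowest order first, it is multiplied in place by a small integer while a carry is propagated, and the digits are printed in reverse for display. The names BASE, LIMIT, multiply_in_place and as_text are invented for this sketch and are not from the original program.

    # A minimal sketch, not the original program: a big number is kept as a list
    # of base-ten digits, lowest order first, and repeatedly multiplied in place
    # by a small integer n with an explicit carry.

    BASE = 10       # base of arithmetic, as in the example discussed above
    LIMIT = 1000    # cap on the number of digits (illustrative, not from the text)

    def multiply_in_place(digits, n):
        """Multiply the digit array by a small integer n, propagating the carry."""
        carry = 0
        for i in range(len(digits)):
            d = digits[i] * n + carry     # single-digit multiply plus incoming carry
            digits[i] = d % BASE          # keep the low-order digit
            carry = d // BASE             # pass the remainder up
        while carry:                      # the final carry may span several digits
            if len(digits) >= LIMIT:
                raise OverflowError("digit array full")
            digits.append(carry % BASE)
            carry //= BASE

    def as_text(digits):
        """Highest-order digit first, i.e. the reversed order used for printing."""
        return "".join(str(d) for d in reversed(digits))

    digits = [1]                          # the running factorial, starting at 1
    for n in range(1, 11):
        multiply_in_place(digits, n)
        print(as_text(digits), "=", str(n) + "!")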
For a single-digit multiply, the working variables must be able to hold the value (base − 1)² + carry, where the maximum value of the carry is (base − 1). Similarly, the variables used to index the digit array are themselves limited in width. A simple way to extend the indices would be to deal with the bignumber's digits in blocks of some convenient size, so that the addressing would be via (block i, digit j) where i and j would be small integers; or one could escalate to employing bignumber techniques for the indexing variables themselves. Ultimately, machine storage capacity and execution time impose limits on the problem size.

IBM's first business computer, the IBM 702 (a vacuum-tube machine) of the mid-1950s, implemented integer arithmetic entirely in hardware on digit strings of any length from 1 to 511 digits. The earliest widespread software implementation of arbitrary-precision arithmetic was probably that in Maclisp. Later, around 1980, the operating systems VAX/VMS and VM/CMS offered bignum facilities, as a collection of string functions in the one case and in the languages EXEC 2 and REXX in the other. An early widespread implementation was available via the IBM 1620 of 1959–1970. The 1620 was a decimal-digit machine which used discrete transistors, yet it had hardware (using lookup tables) to perform integer arithmetic on digit strings of any length from two digits up to whatever memory was available. For floating-point arithmetic, the mantissa was restricted to a hundred digits or fewer, and the exponent was restricted to two digits only. The largest memory supplied offered 60,000 digits; however, Fortran compilers for the 1620 settled on fixed sizes such as 10, though the size could be specified on a control card if the default was not satisfactory.

Arbitrary-precision arithmetic in most computer software is implemented by calling an external library that provides data types and subroutines to store numbers with the requested precision and to perform computations. Different libraries have different ways of representing arbitrary-precision numbers: some work only with integers, while others store floating-point numbers in a variety of bases (decimal or binary powers). Rather than representing a number as a single value, some store numbers as a numerator/denominator pair (rationals), and some can fully represent computable numbers, though only up to some storage limit. Fundamentally, Turing machines cannot represent all real numbers, as the cardinality of ℝ exceeds the cardinality of ℤ.
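To make the library point concrete, here is a minimal sketch using Python's standard facilities (Python happens to ship this kind of arithmetic out of the box; other languages typically call a library such as GMP). The specific values are chosen only for illustration: integers are arbitrary precision by default, fractions.Fraction stores a numerator/denominator pair, and decimal.Decimal works to a requested number of decimal digits.

    from fractions import Fraction
    from decimal import Decimal, getcontext

    print(2 ** 300)                            # Python integers are already arbitrary precision

    r = Fraction(9, 10) + Fraction(53, 100)    # numerator/denominator representation
    print(r)                                   # 143/100, exact

    getcontext().prec = 50                     # request 50 significant decimal digits
    print(Decimal("0.9") + Decimal("0.53"))    # 1.43, exact in a decimal base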

Kayley Johns (Lvl 10) · 3y ago
More answers
Juvenal Dare (Lvl 10) · 3y ago

1.4300000000000002 (the exact sum is 1.43; the extra trailing digits appear because 0.9 and 0.53 have no exact binary floating-point representation, so a calculator working in binary floats rounds slightly)
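As a small illustration (a Python one-off, purely for demonstration), compare the binary floating-point result with decimal arithmetic, which gives the exact sum:

    from decimal import Decimal

    print(0.9 + 0.53)                        # 1.4300000000000002 (binary floats round)
    print(Decimal("0.9") + Decimal("0.53"))  # 1.43 exactly, using decimal arithmetic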


Related questions

What is the rotation time of the Earth?

About 23 hours 56 minutes 4.09053 seconds (the sidereal rotation period).


How fast does the earth revolve each day?

Do you mean how fast the Earth rotates? The Earth rotates once in a few minutes under a day (23 hours 56 minutes 4.09053 seconds). This is called the sidereal period. The sidereal period is not exactly equal to a day because, by the time the Earth has rotated once, it has also moved along its orbit around the Sun, so it has to keep rotating for about another 4 minutes before the Sun appears to be back in the same place in the sky it occupied exactly a day before (a quick check of that figure follows below). As for revolving around the Sun: you can't measure the speed of an object by itself; it has to be measured relative to something else (this was one of Einstein's realizations). If we ask "How fast is the Earth moving?", we have to specify the speed with respect to which other object, since motion cannot be measured without a reference point. We can ask how fast the Earth is moving with respect to its own axis, the Sun, the Milky Way Galaxy, or our Local Group of galaxies.
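A rough back-of-the-envelope check of the "about another 4 minutes" figure (a Python sketch; the 365.25-day orbital period is an approximation):

    # In one day the Earth moves about 360/365.25 degrees along its orbit, so it
    # must turn that much extra before the Sun lines up again.
    extra_angle_deg = 360 / 365.25                    # roughly 0.986 degrees
    sidereal_day_min = 23 * 60 + 56 + 4.09053 / 60    # sidereal day in minutes
    extra_minutes = sidereal_day_min * extra_angle_deg / 360
    print(round(extra_angle_deg, 3), "degrees extra ->", round(extra_minutes, 2), "minutes")
    # prints: 0.986 degrees extra -> 3.93 minutes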


What is the velocity of the gravity of the earth?

At the Equator the surface moves at about 1,000 mph. At 60 degrees N/S the circle of latitude has half the circumference of the Equator, so the surface only moves at about half that speed, roughly 500 mph. At the poles it simply spins on the spot: 0 mph.
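The figures above follow a cosine-of-latitude rule: the surface speed is the circumference of the circle of latitude divided by the rotation period. A small Python sketch (using an approximate mean Earth radius of 3,959 miles; the exact numbers are not from the answer above):

    import math

    RADIUS_MILES = 3959.0                              # mean Earth radius, approximate
    SIDEREAL_DAY_HOURS = 23 + 56/60 + 4.09053/3600     # rotation period in hours

    def surface_speed_mph(latitude_deg):
        # circumference of the circle of latitude, divided by the rotation period
        circumference = 2 * math.pi * RADIUS_MILES * math.cos(math.radians(latitude_deg))
        return circumference / SIDEREAL_DAY_HOURS

    for lat in (0, 60, 90):
        print(lat, "degrees:", round(surface_speed_mph(lat), 1), "mph")
    # prints roughly 1040 mph at the Equator, about 520 mph at 60 degrees, 0 at the poles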