Q: Is the standard deviation best thought of as the distance from the mean?
No. Standard deviation is best thought of as a measure of spread or dispersion: roughly, the typical (root-mean-square) distance of the data points from the mean, not the distance of any particular point. Even in a distribution with a small standard deviation, individual points can lie farther from the mean than one standard deviation, so it should not be read as the distance of a point from the mean.
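As a quick illustration, here is a minimal sketch (Python with NumPy; the samples, means, and seeds are made up for demonstration) showing that the standard deviation matches the root-mean-square distance of points from the mean, while individual points can stray much farther:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two made-up samples: one tight around a large mean,
# one widely spread around a small mean.
tight = rng.normal(loc=100.0, scale=2.0, size=1000)
wide = rng.normal(loc=0.0, scale=10.0, size=1000)

for name, x in (("tight", tight), ("wide", wide)):
    dist = np.abs(x - x.mean())  # each point's distance from the mean
    print(f"{name}: sd={x.std():.2f}, "
          f"rms distance={np.sqrt((dist ** 2).mean()):.2f}, "
          f"max distance={dist.max():.2f}")
```

The printed standard deviation equals the RMS distance exactly, while the maximum distance from the mean is several times larger in both samples.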

Continue Learning about Statistics

Why is standard deviation the best measure of dispersion?

Standard deviation is often considered the best measure of dispersion because it uses every observation in the data set, is expressed in the same units as the data, and behaves predictably for distributions that are close to normal: in a normal distribution, about 68% of values lie within one standard deviation of the mean and about 95% within two.
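For instance, this minimal sketch (NumPy, with simulated normal data; the location, scale, and sample size are arbitrary) checks the empirical rule mentioned above:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=50.0, scale=5.0, size=100_000)  # simulated normal data

mean, sd = x.mean(), x.std()
within_1 = np.mean(np.abs(x - mean) <= 1 * sd)
within_2 = np.mean(np.abs(x - mean) <= 2 * sd)
print(f"within 1 sd: {within_1:.3f}")  # roughly 0.683
print(f"within 2 sd: {within_2:.3f}")  # roughly 0.954
```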


Is the line of best fit the same as linear regression?

Not exactly. Linear regression is a method for producing a line of best fit: ordinary least squares regression chooses the line that minimizes the sum of squared vertical distances from the data points. How well that line describes the data depends on the data itself (check the residuals, the standard deviation of the errors, and so on), and there are other types of regression, such as polynomial regression, for data that a straight line does not describe well.
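As a minimal sketch (NumPy, with toy data invented for illustration), fitting a degree-1 polynomial is one common way to compute an ordinary least squares line of best fit:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: y roughly follows 2x + 1 with added noise.
x = np.linspace(0.0, 10.0, 50)
y = 2.0 * x + 1.0 + rng.normal(scale=1.5, size=x.size)

# An ordinary least squares fit of degree 1 is the line of best fit.
slope, intercept = np.polyfit(x, y, deg=1)
print(f"fitted line: y = {slope:.2f}x + {intercept:.2f}")
```

The fitted slope and intercept should land close to the true values of 2 and 1 used to generate the data.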


What is the relationship between confidence interval and standard deviation?

Short answer: the standard deviation is what sets the width of a confidence interval. Assuming you are dealing with something standard like a normal distribution, you can think of it this way.

A confidence interval rescales the likely error of an estimate into a range, so you can say something like, "I can state with 95% confidence that the population mean lies between A and B." The interval is built from sample data: you take your best point estimate (the sample mean or the sample proportion, depending on what the problem asks for) and add and subtract a margin of error.

Computing the margin of error always involves a standard deviation. For the mean, the best point estimate of the population mean is the sample mean (by the central limit theorem), and the margin of error is E = z(α/2) × σ/√n, where α = 1 − confidence level. For example, at 95% confidence, α = 0.05 and α/2 = 0.025, so you use the z-score that cuts off 0.025 in each tail of the standard normal distribution, which is 1.96. The quantity σ/√n is called the standard error of the mean: it is the standard deviation of the sampling distribution of the means of all possible samples of size n taken from the population (the central limit theorem again). So the confidence interval for the mean is the sample mean ± 1.96 × σ/√n.

If the population standard deviation σ is unknown, use the sample standard deviation instead, but then replace the z-score with the appropriate t-score from a t distribution. For a confidence interval for a proportion, you likewise compute and use the standard deviation of the sampling distribution of sample proportions. The central limit theorem is the key to really understanding what is going on here.
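As an example, here is a minimal sketch (Python with NumPy and SciPy; the sample is simulated and all parameters are made up) of the unknown-σ case described above, where a t critical value replaces the z-score:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
sample = rng.normal(loc=20.0, scale=4.0, size=40)  # simulated sample

n = sample.size
mean = sample.mean()
s = sample.std(ddof=1)       # sample standard deviation
se = s / np.sqrt(n)          # standard error of the mean

confidence = 0.95
alpha = 1.0 - confidence
# Population sigma is unknown, so use a t critical value with n - 1 df.
t_crit = stats.t.ppf(1.0 - alpha / 2.0, df=n - 1)

margin = t_crit * se
print(f"{confidence:.0%} CI for the mean: "
      f"({mean - margin:.2f}, {mean + margin:.2f})")
```

With a known population σ, you would swap the t critical value for z(α/2) = 1.96 and use σ/√n as the standard error, exactly as in the formula above.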


How is mutual fund performance best measured?

Mutual fund performance is best measured by:
- Growth in the total assets under management
- Steady growth in the NAV (net asset value) of the fund
- Minimal fund management charges
- Comparison with the benchmark index and with peer funds


What is the probability of randomly picking a green card from a standard deck of playing cards?

To the best of my knowledge, a standard deck of playing cards has only black and red cards, so the probability is zero.