Sowjanya

File size differs from outside to browser

Hi,

 

I am uploading a file through HTML5, and the file size reported by the browser differs from the size shown outside the browser.

 

In my case, the file size outside the browser is 4.39 MB. When I upload the file and check it in Chrome or Firefox, the size displays as 4.6 MB.

 

Can anyone let me know how to make the browser report the exact size of the file?

 

Thanks in Advance

Sowjanya

Best Answer chosen by Admin (Salesforce Developers) 
sfdcfox

Sounds like the problem isn't the file size, but instead the display of that value in the program you're looking at.

 

This arises because there are two definitions of the megabyte: the base 2 (binary) definition and the base 10 (decimal) definition.

 

The base 2 definition states that a kilo is 2^10, or 1,024. Thus, a kilobyte is 1,024 bytes. Appropriately, then, mega is 2^10 * 2^10, which is 2^(10+10), or 2^20 = 1,048,576, so a megabyte is 1,048,576 bytes. A single allocation unit on a hard drive is a multiple of half a kilobyte, or 512 bytes. System memory is measured in base 2 megabytes. Hardware uses this definition, and Microsoft also uses it in core system components such as Explorer. All of this works well because the processor can address base 2 memory locations more easily than base 10 values.
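
As a minimal sketch of the base 2 arithmetic above (plain JavaScript; the values follow directly from the powers of two):

var kilobyte = Math.pow(2, 10); // 1,024 bytes
var megabyte = Math.pow(2, 20); // 1,048,576 bytes
var allocationUnit = 512;       // half a kilobyte, as described above

console.log(megabyte === kilobyte * kilobyte); // true: mega is 2^10 * 2^10
console.log(kilobyte / allocationUnit);        // 2: a kilobyte is two allocation units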

 

However, here comes the monkey wrench in all of this. The base 10 definition states that a kilo is 10^3, or 1,000, which makes a kilobyte 1,000 bytes. A megabyte then becomes 10^6, or 1,000,000 bytes. Hardware manufacturers advertise their storage capacity using this definition, which means each advertised megabyte is shorted 48,576 bytes. In addition, a giga is 2^30 in base 2 but only 10^9 in base 10. Therefore, a binary gigabyte is 1,073,741,824 bytes, while a hardware manufacturer will only advertise 1,000,000,000 bytes, for a net loss of over 73 million bytes. In all fairness, those bytes are usually reserved as "bad sector" replacements in a modern drive. (EDIT: Some programmers also use this definition in order to confuse users.)
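
To see how the two rulers can produce the two numbers in the question, here is a minimal sketch; the byte count is hypothetical, chosen only because a file of roughly 4.6 million bytes reads as about 4.39 MB in base 2 and about 4.6 MB in base 10:

var bytes = 4603249; // hypothetical byte count, close to the file in question

var binaryMB  = bytes / (1024 * 1024); // base 2: 1 MB = 1,048,576 bytes
var decimalMB = bytes / (1000 * 1000); // base 10: 1 MB = 1,000,000 bytes

console.log(binaryMB.toFixed(2));  // "4.39" -- the figure reported outside the browser
console.log(decimalMB.toFixed(1)); // "4.6"  -- the figure reported by the browser

console.log(Math.pow(2, 30) - Math.pow(10, 9)); // 73741824 -- the per-gigabyte shortfall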

 

Next, you have the factor of rounding. Depending on whom you ask, a 4.39 MB file in binary might actually be anywhere between 4,592,736 and 4,613,734 bytes, while that same file in decimal might be 4,500,001 to 4,699,999 bytes (depending on whether the program uses 1 or 2 decimal places, and whether it rounds up, rounds half, uses banker's rounding, etc.).
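
A rough sketch of how such ranges come about, assuming a simple round-half rule with two decimal places (the figures above use different rounding assumptions, so the exact bounds differ):

// Byte counts that could be displayed as "4.39 MB", assuming round-half and 2 decimals.
function displayRange(displayed, bytesPerMB, decimals) {
  var half = Math.pow(10, -decimals) / 2; // half of one step in the last displayed digit
  return [(displayed - half) * bytesPerMB, (displayed + half) * bytesPerMB];
}

console.log(displayRange(4.39, 1024 * 1024, 2)); // binary ruler
console.log(displayRange(4.39, 1000 * 1000, 2)); // decimal ruler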

 

As you can see, 4,592,736 through 4,613,734 falls between 4,500,001 and 4,699,999. It's fairly safe to assume that your data is still intact; the difference you're seeing comes from the ruler you're measuring with. You will need to use the "File Properties" dialog to see how many bytes the file actually is.
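
If you want to compare the two measurements byte for byte instead of in rounded megabytes, the HTML5 File API exposes the exact size in bytes before the upload even starts. A minimal sketch (the input id here is just an assumption for illustration):

// Assumes an <input type="file" id="upload"> element on the page.
document.getElementById('upload').addEventListener('change', function (e) {
  var file = e.target.files[0];
  if (!file) return;

  // file.size is the exact length in bytes; compare this to the byte count
  // in the OS "Properties" dialog rather than to a rounded MB figure.
  console.log(file.name + ': ' + file.size + ' bytes');
  console.log((file.size / (1024 * 1024)).toFixed(2) + ' MB (binary)');
  console.log((file.size / (1000 * 1000)).toFixed(2) + ' MB (decimal)');
});

The byte count will match on both sides even though the rounded MB figures look different.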

All Answers

SamuelDeRycke

This must be one of the most complete answers I've ever seen in here! :)

Venkata Sowjanya

Thanks for the explanation.

The problem was with my calculation only.

Thank you so much. My problem is solved.