This may be a very trivial question for most of you coders out there, but I am just starting to code in C and any help would be very much appreciated.

For a school assignment, I need to write a program that converts numbers from IEEE 754 representation to decimal, and I simply don't know where to start since I am just starting to learn the C language.

I know Java relatively well, but I have pretty much no experience with C...

Can anybody please tell me what would be the best way to do this operation?

Not entirely sure what you mean by "converter". Are you perhaps referring to something like this? http://www.h-schmidt.net/FloatConverter/

For that you really only need to understand the IEEE 754 standard, which specifies how floating-point values (binary approximations of decimal numbers) are represented.

As demonstrated in the converter above, it boils down to splitting the 32 bits into fields that represent different parts of the value.

Bit 1 [1-bit field] = Sign bit; determines whether the value is positive or negative.

Bits 2-9 [8-bit field] = The exponent field, which scales the value by 2^x. It is stored with a bias: subtract 127 from the stored field to get the actual exponent x, giving a range of -126 to 127 for normal numbers (the all-zeros and all-ones field values are reserved for special cases like zero, subnormals, infinity, and NaN). For example, a stored field of 10000001 (129) means x = 129 - 127 = 2.

Bits 10-32 [23-bit field] = Your "mantissa" (fraction) bits, in which each bit starting with bit 10 is assigned a value of 1/(2^n), where n is its position among the mantissa bits; for normal numbers an implicit leading 1 is added to this sum. I.e.

Bit 10 - 1st mantissa (1/2^1) = 1/2

Bit 11 - 2nd mantissa (1/2^2) = 1/4

Bit 12 - 3rd mantissa (1/2^3) = 1/8

Bit 13 - 4th mantissa (1/2^4) = 1/16

. . .

Bit 31 - 22nd mantissa (1/2^22) = 1/4194304

Bit 32 - 23rd mantissa (1/2^23) = 1/8388608

Standard floats use 32 bits to store their value; doubles use 64 bits. The more bits you have, the more precise your approximation. In C, float and double correspond to these formats on most platforms, and the type you choose determines the precision of your result.

I may have made a few hiccups along the way writing this, but the site above should at least be a nudge in the right direction. This video may also help: http://www.youtube.com/watch?v=Zzx7HN4wo3A