+ 21
What is a bit?
25 Answers
+ 17
A bit is the smallest unit of memory; larger units like KB (kilobytes), MB (megabytes), and GB (gigabytes) are built from it.
A single byte consists of 8 bits:
8 bits = 1 byte
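The relationship between these units can be sketched in a couple of lines of Python (a minimal sketch; note that KB is taken here as the decimal 1000 bytes, while some contexts use the binary 1024, i.e. KiB):

```python
BITS_PER_BYTE = 8
BYTES_PER_KB = 1000   # assumption: decimal SI prefix; some contexts use 1024 (KiB)

print(BITS_PER_BYTE * 1)             # 8 bits in a byte
print(BITS_PER_BYTE * BYTES_PER_KB)  # 8000 bits in a kilobyte
```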
+ 16
https://en.wikipedia.org/wiki/Bit
The bit is the most basic unit of information in computing and digital communications. The name is a contraction of binary digit.[1] The bit represents a logical state with one of two possible values. These values are most commonly represented as either "1" or "0", but other representations such as true/false, yes/no, +/−, or on/off are commonly used.
The correspondence between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. It may be physically implemented with a two-state device.
The symbol for the binary digit is either 'bit' per recommendation by the IEC 80000-13:2008 standard, or the lowercase character 'b', as recommended by the IEEE 1541-2002 standard.
A contiguous group of binary digits is commonly called a bit string, a bit vector, or a single-dimensional (or multi-dimensional) bit array. A group of eight binary digits is called one byte, but historically the size of the byte is not strictly defined. Frequently, half, full, double and quadruple words consist of a number of bytes which is a low power of two.
In information theory, one bit is the information entropy of a binary random variable that is 0 or 1 with equal probability,[2] or the information that is gained when the value of such a variable becomes known.[3][4] As a unit of information, the bit is also known as a shannon,[5] named after Claude E. Shannon.
+ 7
In other words (referring to the exhaustive description by @Jakko Jak), a bit (binary digit) is the most rudimentary unit of data in any field. It merely represents a true or a false value, on or off, is the stove lit, or is it out.
Strings of these bits can be used to represent numbers (fractional and integral), letters, or, by their own nature, a boolean (logical true or false) value. The most common way of visualizing these bits is with 1s and 0s.
You can keep stringing bits together as long as you like, but generally we don't go overboard. The common groupings are: bits (1 bit, of course), nibbles (4 bits, not very common to hear in practice, oddly enough), bytes (8 bits, the most common grouping), words (16 bits, or 2 bytes; ancient PCs that ran DOS worked with this), double-words (32 bits, 4 bytes, or 2 words; x86 processors used this bit width), and quad-words (64 bits, 8 bytes, 4 words, or 2 double-words; today's x64 processors use this width for computations).
Now let's put bits to practical use. In typical integer format, each bit has a place value with a magnitude of 2^(n - 1), where n is the bit's position counting from the right. For example, in the nibble (a string of 4 bits):
1001
the LSB (least significant bit, the one furthest to the right) has a magnitude of 2^(1 - 1) = 1,
while the MSB (most significant bit, the one furthest to the left) has a magnitude of 2^(4 - 1) = 8. The sum of the place values of the 1 bits is 8 + 1 = 9, so 1001 = 9 in decimal.
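That place-value sum can be verified in a couple of lines of Python (a minimal sketch; Python's built-in int() with base 2 does the same conversion):

```python
# Convert the binary string "1001" to decimal by summing place values.
bits = "1001"
value = sum(2 ** i for i, b in enumerate(reversed(bits)) if b == "1")
print(value)          # 9
print(int(bits, 2))   # 9, using Python's built-in base-2 parser
```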
Fractional representations are interesting, because precision becomes an all-important factor. The typical 32-bit floating-point format takes up a double-word:
1 bit for the sign
8 bits for the exponent (also called the characteristic)
23 bits for the mantissa
0 00000000 00000000000000000000000
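You can peek at these three fields for a real number with a short Python sketch (using the standard struct module; the value -6.25 is just an arbitrary example):

```python
import struct

# Pack a float into its 4-byte IEEE 754 single-precision representation,
# then pull out the sign (1 bit), exponent (8 bits), and mantissa (23 bits).
raw = struct.unpack(">I", struct.pack(">f", -6.25))[0]
sign     = raw >> 31          # 1, because the number is negative
exponent = (raw >> 23) & 0xFF # 129 = biased exponent (127 + 2, since 6.25 = 1.5625 * 2**2)
mantissa = raw & 0x7FFFFF     # the fraction bits of 1.5625
print(sign, exponent, mantissa)  # 1 129 4718592
```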
And finally, characters, or letters. ASCII and Unicode handle these at the American and international levels, respectively. ASCII started as a 7-bit format (later stored in a full byte), while a Unicode character can take anywhere from one to four bytes depending on the encoding.
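A quick Python sketch of how characters map to bits and bytes (the euro sign is just one example of a non-ASCII character):

```python
# ASCII characters fit in 7 bits; other Unicode characters need more bytes.
print(ord("A"))                 # 65, fits in one byte
print(format(ord("A"), "08b"))  # 01000001, the bit pattern for 'A'
print("A".encode("utf-8"))      # b'A' (1 byte in UTF-8)
print("€".encode("utf-8"))      # 3 bytes in UTF-8
```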
And that should be the basics. Have fun!
+ 7
A bit is the most basic unit of information. It can only take two values, 0 or 1. You can think of it as 0 (off) and 1 (on). All information that a computer processes is essentially a group of bits, that is, a group of electrical signals that it can understand; in fact, it is the only thing a computer really understands.
The binary system is the mathematical system used to represent bits. Another thing that might confuse you is machine language. As I said, bits are the only thing a computer can understand, and machine language relies on this concept: it is the first language and the lowest programmable level of a computer. However, it is quite difficult to program a computer this way, and it is for this reason that technicians started working on new, easier ways of controlling a computer.
That is how programming languages such as assembly language appeared, followed later by intermediate and high-level programming languages.
+ 4
Caleb Guerra Ortega 🇨🇺 👨🏻💻 A bit just means the smallest unit of binary data. It's either a High (1) or a Low (0).
+ 4
Thanks Calvin Thomas PARVIK PARASHAR
+ 4
Bit - Binary digit
Its value can only be either '0' or '1'.
It is the basic unit of information in computing.
+ 3
The basic unit of information: a binary digit, 0 or 1.
One (on), zero (off)
+ 3
Bit stands for Binary Digit.
Bit is the most basic unit of the base-2 number system or binary number system.
If you don't know what number system is then
remember that it is just a method to express numbers.
Binary numbers are expressed with only two symbols, "0" and "1".
Every single digit in a binary number is called a bit.
For example,
1001 is a binary number.
This number has got 4 digits.
So we can easily say that there are 4 bits.
8 bits make up a byte.
Because every computer and computer-based device in this world uses binary in its logic gates, bits and bytes are closely tied to computing.
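A quick Python check of that bits-and-bytes relationship:

```python
# A byte holds 8 bits, so it can represent 2**8 = 256 distinct values.
print(2 ** 8)              # 256
print((255).bit_length())  # 8, the bits needed for the largest byte value
```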
+ 3
A bit is a one or a zero :D
+ 3
Thanks to all
+ 3
The bit is the smallest unit of information used in computing, in any digital device, or in information theory. With it, we can represent any two values, such as true or false, open or closed, white or black, north or south, etc.
+ 2
The bit is the most basic unit of information in computing and digital communications.
The name is a contraction of binary digit.
1 bit = 0.125 bytes
8 bits = 1 byte
+ 2
bit means binary digit
+ 2
Bits are also your money in Sololearn; you can earn them in different ways... :)
+ 1
Yash Shinde but the smallest addressable unit of storage is the byte
+ 1
The smallest unit of memory