Boolean questions
Create the following variables.
a = Bits("11110000")
b = Bits("10101010")
For each of the following bytes, give an equivalent expression which uses only a, b, and bit operators. The answers to the first two questions are given. (A quick way to check answers in plain Python appears after this list.)
- 01010101
~b
- 00000101
~a & ~b
- 00000001
(a & b) >> 7
- 10000000
a << 3
- 01010000
a & ~b
- 00001010
b & ~a
- 10101011
b | ((a & b) >> 7)
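One way to sanity-check answers like these is to model a and b as plain Python ints and format each result as an 8-bit string. This is only a sketch: it assumes ordinary Python bit operators (including shifts) and masks results to 8 bits, rather than using the lab's Bits class.

# Check the boolean answers with plain Python ints.
MASK = 0xFF                      # keep results to 8 bits, since ~ on Python ints is unbounded
a = 0b11110000
b = 0b10101010

def show(x):
    # Format an 8-bit value the way the lab prints Bits.
    return format(x & MASK, '08b')

print(show(~b))                  # 01010101
print(show(~a & ~b))             # 00000101
print(show((a & b) >> 7))        # 00000001
print(show(a << 3))              # 10000000
print(show(a & ~b))              # 01010000
print(show(b & ~a))              # 00001010
print(show(b | ((a & b) >> 7)))  # 10101011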
Integer questions
These questions are difficult! Try exploring ideas with Bits in the terminal, with paper and pencil, and on a whiteboard. And definitely talk with others.
- If a represents a positive integer, and one = Bits(1, length=len(a)), give an expression equivalent to -a, but which does not use negation.
~a + one
Flipping every bit and then adding one is the two's-complement rule for negating a number (a sketch checking this appears after this list).
- It is extremely easy to double a binary number: just shift all the bits to the left. (a << 1 is twice a.) Explain why this trick works.
The way I envision this trick is similar to multiplying a decimal number by 10: you append a zero and every digit slides one place to the left, so each digit's place value is multiplied by the base. In binary the base is 2, so shifting left by one doubles each bit's place value and therefore doubles the whole number. Using the example from the lab, Bits(4, length=8) is 00000100; shifting left gives 00001000, which is Bits(8, length=8). A zero comes in on the right and the leftmost bit falls off the 8-bit value, which only loses information if that bit was a 1 (see the doubling sketch after this list).
- Consider the following:
>>> hundred = Bits(100, 8)
>>> hundred
01100100
>>> (hundred + hundred)
11001000
>>> (hundred + hundred).int
-56
Apparently 100 + 100 = -56. What's going on here?
hundred = Bits(100, 8) creates an 8-bit value whose pattern is 01100100. Adding hundred to hundred doubles it, just like the previous question, giving 11001000 (as shown in the session above). The surprise comes from .int, which interprets that pattern as a signed 8-bit integer: the leading 1 marks a negative number, so 11001000 is read as -56 rather than 200. The true sum, 200, does not fit in the signed 8-bit range defined by the first line, >>> hundred = Bits(100, 8), so it wraps around to a negative value: 200 - 256 = -56 (see the overflow sketch after this list).
- What is the bit representation of negative zero? Explain your answer.
00000000. Negating zero with the two's-complement rule gives ~00000000 + 1 = 11111111 + 1 = 00000000 (the carry out of the top bit is dropped), so negative zero has the same bit pattern as zero; there is no separate -0 in this representation.
- What's the largest integer that can be represented in a single byte? Explain your reasoning.
127. One bit is effectively spent distinguishing positive from negative, leaving seven bits for the magnitude; the largest non-negative pattern is 01111111, which is 2^7 - 1 = 127.
- What's the smallest integer that can be represented in a single byte? Explain your reasoning.
-128, the pattern 10000000. The negative side of the range holds one more value than the positive side because zero uses up one of the non-negative patterns, so a byte runs from -128 through 127.
- What's the largest integer that can be represented in n bits? Explain your reasoning.
2^(n-1) - 1, by the same reasoning as the single-byte case: one bit goes to the sign and the remaining n-1 bits can all be 1. For n = 8 that is 2^7 - 1 = 127 (a sketch computing the n-bit range appears after this list).
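The sketches below check a few of these integer answers using plain Python ints masked to 8 bits; they assume ordinary Python int operators rather than the lab's Bits class. First, the negation rule, which also shows why negative zero is just zero:

# Two's-complement negation: flip every bit, then add one, within 8 bits.
MASK = 0xFF

def neg8(x):
    # Negate an 8-bit value using only ~ and +, then trim to 8 bits.
    return (~x + 1) & MASK

print(format(neg8(100), '08b'))    # 10011100, the 8-bit pattern for -100
print((-100) & MASK == neg8(100))  # True: matches ordinary negation modulo 256
print(format(neg8(0), '08b'))      # 00000000: "negative zero" is the same pattern as zero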
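Next, the doubling trick: shifting left multiplies each bit's place value by 2, and in a fixed 8-bit value the leftmost bit simply falls off the end.

# Doubling by shifting left.
MASK = 0xFF
print(format((0b00000100 << 1) & MASK, '08b'))  # 00001000, i.e. 4 doubled to 8
print(format((0b11110000 << 1) & MASK, '08b'))  # 11100000: the top 1 is lost, so the result is no longer double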
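Then the 100 + 100 overflow: keeping only 8 bits of the sum and reading the pattern as signed reproduces the -56 from the session above. The to_signed helper is written just for this sketch to mimic what .int appears to do.

# Interpreting an overflowed 8-bit sum as a signed (two's-complement) integer.
MASK = 0xFF

def to_signed(x):
    # Read an 8-bit pattern as signed: a leading 1 means the value is negative.
    x &= MASK
    return x - 256 if x & 0b10000000 else x

doubled = (100 + 100) & MASK   # only 8 bits survive, as with Bits(100, 8) + Bits(100, 8)
print(format(doubled, '08b'))  # 11001000
print(to_signed(doubled))      # -56, because 200 - 256 = -56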
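Finally, the signed range for n bits, which matches the single-byte answers when n = 8:

# Signed (two's-complement) range of an n-bit value: -2**(n-1) through 2**(n-1) - 1.
def signed_range(n):
    return -2**(n - 1), 2**(n - 1) - 1

print(signed_range(8))   # (-128, 127)
print(signed_range(16))  # (-32768, 32767)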
Text questions
- Look at the bits for a few different characters using the utf8 encoding. You will notice they have different bit lengths:
>>> Bits('a', encoding='utf8')
01100001
>>> Bits('ñ', encoding='utf8')
1100001110110001
>>> Bits('♣', encoding='utf8')
111000101001100110100011
>>> Bits('😍', encoding='utf8')
11110000100111111001100010001101
When it's time to decode a sequence of utf8-encoded bits, the decoder somehow needs to decide when it has read enough bits to decode a character, and when it needs to keep reading. For example, the decoder will produce 'a' after reading 8 bits, but after reading the first 8 bits of 'ñ', it realizes it needs to read 8 more bits.
Make a hypothesis about how this could work.
Initially I thought the length would be signified somewhere in the last two bits of the byte, but I don't see any consistency in those bits across the examples above.
Maybe it has to do with the total number of bits. Perhaps the first characters of the alphabet are assigned 8 bits and everything beyond that is assigned more. Or there could be a built-in check at the 8th bit, running a kind of if/then sequence: if the first 8 bits form a byte that corresponds to a character, the computer generates that character; if they do not, it keeps reading bits until it finds a pattern that does. One way to explore this is to look at the first byte of each example, as in the sketch below.
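One way to test these hypotheses without the Bits class is to encode the same characters with Python's built-in str.encode and compare the first byte of each encoding to its total length:

# Compare each character's encoded length with the bits of its first byte.
for ch in ['a', 'ñ', '♣', '😍']:
    data = ch.encode('utf8')
    first_byte = format(data[0], '08b')
    print(ch, len(data), 'byte(s), first byte', first_byte)

# For these examples the number of leading 1 bits in the first byte (0, 2, 3, 4)
# tracks the number of bytes in the encoding (1, 2, 3, 4), which suggests the first
# byte itself tells the decoder how many more bytes to read.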