To expand on Ben's answer a bit (though, as Ben said, it really isn't an OS question so much as a general assembly programming question): the XOR instruction performs the bitwise logical operation known as 'exclusive or', which is true if and only if exactly one of the two logical values it is applied to is true (here, each value is stored as an individual bit, with false == 0 and true == 1).
XOR is one of several bitwise logical operations, the other common ones being AND, OR, and NOT. The general Boolean truth tables for the common logical operations are:
Unary Not:
Code:
F | T
-------
T | F
And:
Code:
| F | T
-----------
F | F | F
-----------
T | F | T
Regular (Inclusive) Or:
Code:
| F | T
-----------
F | F | T
-----------
T | T | T
Exclusive Or:
Code:
| F | T
-----------
F | F | T
-----------
T | T | F
Since these instructions operate on whole data words, they apply the operation to the individual bits, paired up across the two operands. So, if you have one byte x that holds the binary value 1100 1001 (splitting the nybbles just to make it easier to read) and another byte y that has the binary value 0001 1010, then the results would be
Code:
NOT x
1100 1001
-----------
0011 0110
AND x, y
1100 1001
0001 1010
-----------
0000 1000
OR x, y
1100 1001
0001 1010
-----------
1101 1011
XOR x, y
1100 1001
0001 1010
-----------
1101 0011
Now, here's the trick being used: if you XOR any value against itself, all the set bits cancel, clearing the entire datum. In other words,
Code:
XOR n, n == 0, for all n
Note that these bitwise operators are not specific to assembly language; the same AND, OR, XOR, and NOT operations are performed by the `&` (ampersand), `|` (vertical bar, or pipe), `^` (caret), and `~` (tilde) operators in C and related languages:
Code:
#include <stdint.h>   /* for uint8_t */

uint8_t a, b, c, d, n, m, x, y;
x = 0xC9; // == binary 11001001 == decimal 201
y = 0x1A; // == binary 00011010 == decimal 26
a = ~x; // == binary 00110110 == dec 54 == hex 36
b = x & y; // == binary 00001000 == dec 8 == hex 08
c = x | y; // == binary 11011011 == dec 219 == hex DB
d = x ^ y; // == binary 11010011 == dec 211 == hex D3
n = x ^ x; // == binary 00000000 == dec 0 == hex 00
m = y ^ y; // == binary 00000000 == dec 0 == hex 00
I hope this helps, because to be honest, this is something you really ought to have down solid before jumping into OS-Dev.
As has already been said, the main reason this is sometimes used is that, in the x86 instruction set (and several others), the XOR operation (also called EOR or something similar in other instruction set architectures) is encoded in fewer bytes than the equivalent MOV, and in some implementations (for example, some of the early 8088 models) it is faster as well. Also, as Ben mentioned, it clears the Overflow and Carry flags in the FLAGS register, which MOV doesn't, and that's sometimes useful to do when clearing a register.
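To make the size difference concrete, here is what the two forms look like in NASM-style syntax for a 32-bit register (the encodings shown are the standard x86 ones; I'm listing them for illustration, not as the only possible encodings):

```nasm
xor eax, eax    ; 31 C0           - 2 bytes; EAX = 0, clears CF/OF, sets ZF
mov eax, 0      ; B8 00 00 00 00  - 5 bytes; EAX = 0, no flags affected
```

In a tight loop or a size-constrained boot sector, those three saved bytes (and the flag side effects) are exactly why you keep seeing the XOR idiom.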
This isn't universal, however; for example, both the size and the number of cycles used by the equivalent instructions are the same either way on ARM CPUs, and on MIPS the MOVE <regX>, <regY> pseudo-instruction is just an alias for the OR <regX>, $zero, <regY> instruction (the register $0 - also called $zero - is permanently set to zero), while CLEAR <regX> is just OR <regX>, $zero, $zero. The flags/status register issues differ, too; the CPSR (Current Program Status Register) in the ARM design behaves differently from the x86 FLAGS register, while MIPS doesn't have a status register, period (at least not one used for this purpose).