Is the code produced by CRC a cyclic code?

Wikipedia says that the CRC algorithm is based on cyclic codes, but it doesn’t say that it *is* a cyclic code. If I understood correctly, a linear code of length $n$ is called cyclic if and only if its generator polynomial divides $x^n-1$. So I think that in general CRC codes are not cyclic. Is that true? And if so, why is it called a “Cyclic Redundancy Check/Code” rather than, for example, a “Polynomial Redundancy Code”?


Setting aside typical bells and whistles, such as the first 16 bits being complemented, a CRC-encoded transmission
($k$ data bits followed by CRC bits) can be viewed as a codeword
in a systematic cyclic code. In a practical implementation, $k$ is
restricted to at most a few thousand bits, while the cyclic code
has a much larger block length.
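This systematic view can be sketched in a few lines. The following is an illustrative sketch only, assuming the plain CRC-16 generator $x^{16}+x^{15}+x^2+1$ with none of the bells and whistles mentioned above; GF(2) polynomials are represented as Python integers, bit $i$ being the coefficient of $x^i$:

```python
# 0x18005 = x^16 + x^15 + x^2 + 1: the plain CRC-16 generator
# (an illustrative choice; real links layer extra conventions on top).
CRC16_POLY = 0x18005
CRC_BITS = 16

def poly_mod(dividend: int, divisor: int) -> int:
    """Remainder of GF(2) polynomial long division (XOR division)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def crc_encode(data: int) -> int:
    """Systematic encoding: codeword(x) = data(x)*x^16 + remainder."""
    shifted = data << CRC_BITS            # data(x) * x^16
    return shifted | poly_mod(shifted, CRC16_POLY)

codeword = crc_encode(0b1101011011)
# By construction, the codeword polynomial is a multiple of the generator:
assert poly_mod(codeword, CRC16_POLY) == 0
```

Because the data bits occupy the high-order end and the remainder the low-order end, the data appears verbatim in the codeword, which is what “systematic” means.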

For example, with CRC-16, the code is a $[2^{15}-1,2^{15}-17]= [32767, 32751]$ single-error-correcting
double-error-detecting expurgated Hamming code whose generator polynomial (the CRC-16 polynomial) is of the form $(x+1)p(x)$ where $p(x)$ is
a primitive polynomial of degree $15$. So, if $1600$ data bits
are to be transmitted, the actual transmission is $1616$ bits, and this
transmission is a codeword in the $[32767, 32751]$ systematic cyclic code. However, the leading
$32751-1600 = 31151$ data bits in this codeword are $0$, and
they are ignored at the transmitter as well as the receiver, that is,
they are not transmitted at all. What is
transmitted is just the $1600+16 = 1616$ bits which are at the low-order
end of the codeword polynomial. At the receiver, the system knows
that the high-order bits, however many they might be, are all $0$
and can be ignored: it simply processes the $1616$ bits it has received
as a polynomial of degree $1615$, and checks whether the
received polynomial is a multiple of the CRC-16 polynomial.
The next transmission might have only $256$ data bits in it. No matter:
the receiver processes only $256+16 = 272$ bits. So, how can the receiver
tell if it is supposed to process $1616$ bits or $272$ bits? It doesn’t
need to know! The receiver simply processes the received bit
stream until an “end-of-transmission” (EOT) or “end-of-file” (EOF)
marker is reached. At that point,
if the “division by CRC-16 polynomial” process has produced a $0$
remainder, the received data bits (everything in the received stream
of bits except the last $16$ bits which are the CRC bits) are
accepted as correct; if not,
a re-transmission is requested. Note that no attempt is made to use
the error-correcting capability of the code; it is used for error
detection only.
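The receiver’s procedure can be sketched as follows (again assuming the plain CRC-16 generator; the 1600-bit and 256-bit frames are hypothetical examples). Since a polynomial’s leading zero coefficients are simply absent, checking only the transmitted low-order bits automatically agrees with checking the full 32767-bit codeword:

```python
CRC16_POLY = 0x18005  # x^16 + x^15 + x^2 + 1 (illustrative plain CRC-16)

def poly_mod(dividend: int, divisor: int) -> int:
    """Remainder of GF(2) polynomial long division (XOR division)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def crc_encode(data: int) -> int:
    shifted = data << 16
    return shifted | poly_mod(shifted, CRC16_POLY)

def crc_check(received: int) -> bool:
    """Accept iff the received polynomial is a multiple of the generator."""
    return poly_mod(received, CRC16_POLY) == 0

# 1600 data bits -> 1616-bit frame; 256 data bits -> 272-bit frame.
# The same check handles both: no length information is needed.
frame_long  = crc_encode((1 << 1599) | 0b1011)  # arbitrary 1600-bit data
frame_short = crc_encode((1 << 255) | 0b1011)   # arbitrary 256-bit data
assert crc_check(frame_long) and crc_check(frame_short)
# A corrupted bit leaves a nonzero remainder -> request retransmission:
assert not crc_check(frame_long ^ (1 << 1000))
```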

TL;DR: Yes, CRC transmissions are codewords in a
systematic cyclic code, but nobody bothers to think about the
transmissions this way, or to exploit this notion in any way.
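For completeness, the cyclic structure itself can be checked numerically: take a codeword of the full block length $n = 32767$ and verify that its cyclic shift (multiplication by $x$ modulo $x^n+1$) is again a multiple of the generator. A sketch, once more assuming the plain CRC-16 generator:

```python
N = 32767             # block length of the [32767, 32751] cyclic code
CRC16_POLY = 0x18005  # (x+1)(x^15+x+1) = x^16 + x^15 + x^2 + 1

def poly_mod(dividend: int, divisor: int) -> int:
    """Remainder of GF(2) polynomial long division (XOR division)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def poly_mul(a: int, b: int) -> int:
    """Carry-less (GF(2)) polynomial multiplication."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def cyclic_shift(c: int, n: int) -> int:
    """Multiply by x modulo x^n + 1, i.e. rotate the n-bit word left."""
    c <<= 1
    if c >> n:            # the x^n term wraps around to 1
        c ^= (1 << n) | 1
    return c

# A full-length codeword of degree n-1: generator times a degree-32750 message.
codeword = poly_mul(CRC16_POLY, (1 << 32750) | 1)
assert codeword.bit_length() == N
assert poly_mod(codeword, CRC16_POLY) == 0
# The cyclic shift wraps around, yet is still divisible by the generator,
# because the generator divides x^32767 + 1.
assert poly_mod(cyclic_shift(codeword, N), CRC16_POLY) == 0
```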