Information Theory MCQ Quiz in Malayalam - Objective Questions with Answers for Information Theory - Download Free PDF
Last updated on Mar 10, 2025
Top Information Theory MCQ Objective Questions
Information Theory Question 1:
A voice-grade AWGN telephone channel has a bandwidth of 4.0 kHz and a two-sided noise PSD \(\frac{{{\eta _0}}}{2} = 2.5 \times {10^{ - 5}}\) watts/Hz. If information is to be transmitted at a rate of 52 kbps, the minimum bit energy (in mJ/bit) for a negligibly small error rate is _______.
Answer (Detailed Solution Below) 30 - 32
Information Theory Question 1 Detailed Solution
According to the noisy-channel coding theorem,
the channel capacity of a given channel is the highest information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability.
∴ As small error probability is required: C = 52 kbps
B = 4 kHz
\(\frac{{{\eta _0}}}{2} = 2.5 \times {10^{ - 5}}\;{\rm{W/Hz}}\)
\(N = {\eta _0}B = 2 \times 2.5 \times {10^{ - 5}} \times 4 \times {10^3} = 0.2\;{\rm{W}}\)
\(C = B{\log _2}\left( {1 + \frac{S}{N}} \right)\)
\(\frac{C}{B} = lo{g_2}\left( {1 + \frac{S}{N}} \right)\)
\(1 + \frac{S}{N} = {2^{C/B}} = {2^{13}} = 8192\)
\(\frac{S}{N} = 8191\)
\(S = 8191 \times N = 8191 \times 0.2 = 1638.2\;{\rm{W}}\)
\({E_b} = \frac{S}{{{R_b}}}\)
\({E_b} = \frac{{1638.2}}{{52 \times {{10}^3}}} = 31.5 \times {10^{ - 3}}\;{\rm{J/bit}} \approx 31.5\;{\rm{mJ/bit}}\)
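As a quick numerical check (the variable names below are our own, not from the question), this short Python sketch reproduces the steps above: noise power, required S/N from the capacity formula, and energy per bit.

```python
import math

B = 4e3                  # channel bandwidth in Hz
psd_two_sided = 2.5e-5   # two-sided noise PSD, eta0/2, in W/Hz
Rb = 52e3                # information rate in bits/s (C = Rb for negligible error)

N = 2 * psd_two_sided * B     # total noise power: eta0 * B = 0.2 W
snr = 2 ** (Rb / B) - 1       # from C = B*log2(1 + S/N) with C = Rb
S = snr * N                   # required signal power in W
Eb = S / Rb                   # energy per bit in J/bit

print(f"N = {N} W, S/N = {snr}, S = {S:.1f} W, Eb = {Eb*1e3:.3f} mJ/bit")
# Eb comes out to about 31.5 mJ/bit, inside the accepted 30 - 32 range.
```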
Information Theory Question 2:
__________ is the informal network of communication that intersects several paths, circumvents rank or authority, and can link organizational members in any combination or direction.
Answer (Detailed Solution Below)
Information Theory Question 2 Detailed Solution
- Grapevine is a casual business communication channel.
- Its name stems from the fact that it extends in all directions throughout the company, regardless of authority levels.
- The phrase "heard through the grapevine" refers to information obtained through rumors or gossip that is informal and unofficial.
- The typical interpretation is that the information was spread orally among friends or coworkers, sometimes in a private manner.
- Even if there are official channels in an organization, informal channels usually arise from interactions between members of the organization.
Important Points
- To achieve goals, managers in informational roles create, acquire, and disseminate knowledge among staff members and senior colleagues.
- All designs, including e-learning design, should adhere to the universal design concept of the hierarchy of information.
- Combining data from disparate sources with various conceptual, contextual, and typographic representations is known as information integration (II).
Information Theory Question 3:
_______ states that if the transmission rate is less than the channel capacity, then there exists _______ that permits error-free transmission.
Answer (Detailed Solution Below)
Information Theory Question 3 Detailed Solution
Shannon–Hartley law:
It gives the channel capacity C, i.e. the theoretical upper bound on the rate at which information can be communicated with an arbitrarily low error rate, using an average received signal power S, over an analog communication channel subject to additive white Gaussian noise (AWGN) of power N.
Mathematically, it is defined as:
\(C = B\,{\log _2}\left( {1 + \frac{S}{N}} \right)\)
C = Channel capacity
B = Bandwidth of the channel
S = Signal power
N = Noise power
∴ C is a measure of the capacity of the channel; transmitting information at a rate higher than C without error is impossible.
Forward error correction (FEC), also known as channel coding, adds redundant bits to the transmitted bit stream using error-correcting codes before the data passes through the channel. This avoids the need to retransmit the signal when errors occur.
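As a minimal illustration of channel coding (a toy scheme, not one the question refers to), the sketch below implements a 3x repetition code: each bit is transmitted three times and the receiver decodes by majority vote, so a single bit error per group is corrected without any retransmission.

```python
def repetition_encode(bits, n=3):
    """Repeat each bit n times (adds redundancy before transmission)."""
    return [b for b in bits for _ in range(n)]

def repetition_decode(coded, n=3):
    """Majority-vote each group of n received bits."""
    return [int(sum(coded[i:i + n]) > n // 2) for i in range(0, len(coded), n)]

data = [1, 0, 1, 1, 0]
tx = repetition_encode(data)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1, 0,0,0]
rx = tx[:]
rx[4] = 1                             # flip one bit in the second group (channel error)
assert repetition_decode(rx) == data  # the single error is corrected
```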
Information Theory Question 4:
If there are M messages and each message has probability p = 1/M, the entropy is
Answer (Detailed Solution Below)
Information Theory Question 4 Detailed Solution
Concept:
The entropy of a probability distribution is the average amount of information obtained when drawing a symbol from that distribution.
It is calculated as:
\(H=\underset{i=1}{\overset{n}{\mathop \sum }}\,{{p}_{i}}{{\log }_{2}}\left( \frac{1}{{{p}_{i}}} \right)bits/symbol\)
pi is the probability of the occurrence of a symbol.
Calculation:
With p = 1/M, for each of the M messages, the entropy will be:
\(H=\underset{i=1}{\overset{M}{\mathop \sum }}\,{\frac{1}{M}}{{\log }_{2}}\left( \frac{1}{{1/M}} \right)bits/symbol\)
\(H=\frac{1}{M}{{\log}_{2}}M\underset{i=1}{\overset{M}{\mathop \sum }}\,1\;bits/symbol\)
\(H=\frac{1}{M}{{\log}_{2}}M\times M\)
\(H={\log _2}M\)
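The result is easy to verify numerically; the small helper below (our own, not part of the question) evaluates the entropy sum for several values of M and confirms that it equals log2 M.

```python
import math

def entropy(probs):
    """H = sum of p_i * log2(1/p_i), in bits/symbol."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

for M in (2, 4, 8, 16):
    H = entropy([1 / M] * M)            # M equiprobable messages
    assert math.isclose(H, math.log2(M))
    print(f"M = {M:2d}: H = {H:.3f} bits/symbol = log2(M)")
```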
Information Theory Question 5:
If the SNR of an 8 kHz band-limited white Gaussian channel is 25 dB, the channel capacity is:
Answer (Detailed Solution Below)
Information Theory Question 5 Detailed Solution
Concept:
The capacity of a band-limited AWGN channel is given by the formula:
\(C = B~{\log _2}\left( {1 + \frac{S}{N}} \right)\)
C = Maximum achievable data rate (in bits/sec)
B = channel bandwidth
\(\frac{S}{N}\) = signal-to-noise power ratio
In the expression of channel capacity, S/N is not in dB.
Calculation:
Given
B.W. = 8 kHz, S/N = 25 dB
Since the S/N ratio is in dB,
\(10~log_{10}(\frac{S}{N})=25\)
\(log_{10}(\frac{S}{N})=2.5\)
\(\frac{S}{N}=10^{2.5}\approx316.22\)
The channel capacity will be:
\(C = (8×10^3)~{\log _2}( {1 + 316.22})\)
C = 66.47 kbps
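The same steps can be checked in a few lines; the sketch below (variable names are ours) converts the SNR from dB to a plain power ratio before applying the capacity formula, which is the step most easily missed.

```python
import math

B = 8e3          # bandwidth in Hz
snr_db = 25      # SNR in dB

snr = 10 ** (snr_db / 10)        # convert dB to a plain power ratio (about 316.2)
C = B * math.log2(1 + snr)       # Shannon capacity in bits/s
print(f"S/N = {snr:.2f}, C = {C/1e3:.2f} kbps")   # about 66.5 kbps
```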
Information Theory Question 6:
Channel capacity of a noise-free channel having m symbols is given by:
Answer (Detailed Solution Below)
Information Theory Question 6 Detailed Solution
Concept:
Channel capacity is defined as the intrinsic ability of a communication channel to convey information.
Channel capacity per symbol is defined as:
\({C_s} = \mathop {\max }\limits_{\left\{ {p\left( {{x_i}} \right)} \right\}} I\left( {x;y} \right)b/symbol\)
Where:
\(\mathop {{\rm{max}}}\limits_{\left\{ {p\left( {{x_i}} \right)} \right\}} \) = the maximization over all possible input probability distributions on X.
I (x ; y) = Mutual Information of the channel defined as:
I(x ; y) = H(x) – H(x|y) ‘OR’ I(y ; x) = H(y) – H(y|x)
H(x|y) = conditional entropy
Analysis:
A noise-free channel is both lossless and deterministic, i.e.
Noise-free channel = Lossless + Deterministic
For a lossless channel:
H(x|y) = 0
I(x ; y) = H(x)
So,
\({C_s} = \mathop {{\rm{max}}}\limits_{\left\{ {p\left( {{x_i}} \right)} \right\}} H\left( x \right) = {\log _2}m\)
Where,
m = numbers of symbols in X
For a deterministic channel:
H(y|x) = 0
I(x ; y) = H(y)
So,
\({C_s} = \mathop {{\rm{max}}}\limits_{\left\{ {p\left( {{x_i}} \right)} \right\}} H\left( y \right) = {\log _2}n\)
Where, n = number of symbols in Y
Hence, for a noise-free channel, we can write:
I(x ; y) = H(x) = H(y), i.e.
Cs = log2 m = log2 n
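As a concrete check of this result (the uniform-input modelling below is our own, not from the solution), the sketch models a noise-free channel with m symbols as an identity transition matrix and confirms that I(x ; y) = H(x) = log2 m for the capacity-achieving uniform input.

```python
import math

def entropy(p):
    return sum(pi * math.log2(1 / pi) for pi in p if pi > 0)

m = 8
px = [1 / m] * m                                   # uniform (capacity-achieving) input
P = [[1.0 if i == j else 0.0 for j in range(m)]    # noise-free channel: identity P(y|x)
     for i in range(m)]

pxy = [[px[i] * P[i][j] for j in range(m)] for i in range(m)]      # joint p(x, y)
py = [sum(pxy[i][j] for i in range(m)) for j in range(m)]          # marginal p(y)

H_x = entropy(px)
H_xy = entropy([pxy[i][j] for i in range(m) for j in range(m)])
H_x_given_y = H_xy - entropy(py)          # H(X|Y) = H(X,Y) - H(Y) = 0 for a lossless channel
I = H_x - H_x_given_y
print(I, math.log2(m))                    # both equal 3.0 bits/symbol
```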
Information Theory Question 7:
The minimum Hamming distance in a block code must be _________ to correct t bits in error.
Answer (Detailed Solution Below)
Information Theory Question 7 Detailed Solution
The minimum Hamming distance dmin of a block code must satisfy:
1) dmin ≥ s + 1, to detect s errors
2) dmin ≥ 2t + 1, to correct t errors
3) dmin ≥ t + s + 1, to correct t errors and simultaneously detect s (> t) errors
s = number of errors that can be detected
t = number of errors that can be corrected
So Option (4) is correct.
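Reading the bounds the other way round, the helper below (a generic illustration, not tied to any particular block code) reports how many errors a code with a given dmin can detect and correct.

```python
def error_capability(d_min):
    """Return (s, t): errors detectable and correctable for minimum distance d_min."""
    s = d_min - 1            # detects up to d_min - 1 errors      (d_min >= s + 1)
    t = (d_min - 1) // 2     # corrects up to floor((d_min-1)/2)   (d_min >= 2t + 1)
    return s, t

for d in (3, 5, 7):
    s, t = error_capability(d)
    print(f"d_min = {d}: detects {s} errors, corrects {t} errors")
# e.g. to correct t errors alone, d_min must be at least 2t + 1
```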
Information Theory Question 8:
In _______ code, symbols 1 and 0 are represented by transmitting pulses of amplitudes + and -.
Answer (Detailed Solution Below)
Information Theory Question 8 Detailed Solution
There are three types of line coding:
- Unipolar
- Polar
- Bi-Polar
Polar Schemes:
- In polar schemes, the voltages lie on both sides of the axis.
- Two amplitude (voltage) levels are used. In NRZ-L (NRZ-Level), the voltage level determines the value of the bit: binary 1 typically maps to the high logic level and binary 0 to the low logic level, i.e. symbols 1 and 0 are represented by transmitting pulses of amplitudes + and -.
- In NRZ-I (NRZ-Invert), the two-level signal has a transition at a bit boundary if the next bit to be transmitted is a logical 1, and no transition if the next bit is a logical 0.
Unipolar Non-return to zero (NRZ):
- It is a unipolar (all the signal levels are either above or below the axis) line coding scheme in which positive voltage defines bit 1 and the zero voltage defines bit 0.
- The signal does not return to zero in the middle of the bit interval, hence the name NRZ. For example: Data = 10110.
Bipolar Schemes: In this scheme there are three voltage levels: positive, negative, and zero. The voltage level for one data element is zero, while the voltage level for the other element alternates between positive and negative.
Important Note: A line code is a code used for data transmission of a digital signal over a transmission line. The coding is chosen to avoid overlap and distortion of the signal, such as inter-symbol interference.
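To make the mapping concrete, the sketch below (illustrative only; the function names are ours) converts a bit sequence into polar NRZ-L levels and, for comparison, bipolar AMI levels.

```python
def nrz_l(bits, v=1):
    """Polar NRZ-L: bit 1 -> +V, bit 0 -> -V."""
    return [v if b == 1 else -v for b in bits]

def bipolar_ami(bits, v=1):
    """Bipolar AMI: bit 0 -> 0 V, bit 1 -> alternating +V / -V."""
    levels, last = [], -v
    for b in bits:
        if b == 0:
            levels.append(0)
        else:
            last = -last          # alternate the polarity of each mark
            levels.append(last)
    return levels

data = [1, 0, 1, 1, 0]
print(nrz_l(data))        # [1, -1, 1, 1, -1]
print(bipolar_ami(data))  # [1, 0, -1, 1, 0]
```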
Information Theory Question 9:
A discrete memoryless source has an alphabet {a1, a2, a3, a4} with corresponding probabilities \(\left\{ {\frac{1}{2},\frac{1}{4},\frac{1}{8},\frac{1}{8}} \right\}\). The minimum required average codeword length in bits to represent this source for error-free reconstruction is ________
Answer (Detailed Solution Below) 1.75
Information Theory Question 9 Detailed Solution
Concept:
The information associated with an event is inversely related to its probability of occurrence: \(I_i = {\log _2}\left( {\frac{1}{{{P_i}}}} \right)\) bits.
Entropy: The average amount of information per symbol is called the entropy.
\(H = \;\mathop \sum \limits_i {P_i}{\log _2}\left( {\frac{1}{{{P_i}}}} \right)\;bits/symbol\)
Calculation:
Given symbols a1, a2, a3, a4 with probabilities 1/2, 1/4, 1/8, 1/8:
\( H = {\frac{1}{2}{{\log }_2}2 + \frac{1}{4}{{\log }_2}4 + \frac{1}{8}{{\log }_2}8 + \frac{1}{8}{{\log }_2}8} \)
H = 0.5 + 0.5 + 0.375 + 0.375
H = 1.75 bits/symbol
The minimum required average codeword length is 1.75 bits/symbol.
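Because the probabilities are all powers of 1/2, a prefix code such as {0, 10, 110, 111} (one possible Huffman code, assumed here for illustration) meets the entropy bound exactly; the sketch below checks that both the entropy and that code's average length equal 1.75 bits/symbol.

```python
import math

probs = [1/2, 1/4, 1/8, 1/8]
codewords = ["0", "10", "110", "111"]     # one possible Huffman code for these probabilities

H = sum(p * math.log2(1 / p) for p in probs)            # source entropy
L_avg = sum(p * len(c) for p, c in zip(probs, codewords))  # average codeword length
print(H, L_avg)   # both 1.75 bits/symbol -> the code achieves the minimum average length
```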
Information Theory Question 10:
Discrete source S1 has 4 equiprobable symbols while discrete source S2 has 16 equiprobable symbols. When the entropy of these two sources is compared, the entropy of:
Answer (Detailed Solution Below)
Information Theory Question 10 Detailed Solution
Concept:
Entropy of a source X, H(X):
\(H\left( X \right) = \mathop \sum \nolimits_{i = 1}^n p\left( {{x_i}} \right).{\log _2}\frac{1}{{p\left( {{x_i}} \right)}} = {\log _2}m\)
[for 'm' equiprobable symbols]
Calculation:
Case-I: For source S1;
No. of symbols: m = 4
H(x) = log2 m
= 2 bits / symbol
Case-II: For source S2;
No. of symbols: m = 16
H(x) = log2 m
= 4 bits / symbol
∴ The entropy of S1 is less than that of S2.
Important Points
1) Entropy for a continuous random variable is known as differential entropy:
\(H\left( X \right) = \mathop \smallint \nolimits_{ - \infty }^\infty {f_X}\left( x \right).{\log _2}\frac{1}{{{f_X}\left( x \right)}}dx\)
fX(x) ⇒ PDF of RV 'X'
2) Entropy is a measure of uncertainty.
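To illustrate the differential-entropy formula, the sketch below (the Gaussian example is our own choice, not part of the question) numerically integrates fX(x) log2(1/fX(x)) for a standard normal PDF and compares the result with the known closed form 0.5 log2(2πeσ²).

```python
import math

sigma = 1.0

def f(x):
    """Standard normal PDF with standard deviation sigma."""
    return math.exp(-x * x / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

# Riemann-sum approximation of the integral of f(x) * log2(1/f(x)) dx over [-8, 8] sigma
dx = 1e-3
xs = [i * dx for i in range(-8000, 8001)]
h_numeric = sum(f(x) * math.log2(1 / f(x)) * dx for x in xs)

h_closed = 0.5 * math.log2(2 * math.pi * math.e * sigma**2)   # known closed-form value
print(h_numeric, h_closed)        # both about 2.05 bits
```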