Random Variables & Distribution Functions MCQ Quiz - Objective Questions with Answers for Random Variables & Distribution Functions - Download Free PDF

Last updated on Apr 9, 2025

Get Random Variables & Distribution Functions Multiple Choice Questions (MCQ Quiz) with answers and detailed solutions. Download these free Random Variables & Distribution Functions MCQ Quiz PDFs and prepare for your upcoming exams such as Banking, SSC, Railway, UPSC, and State PSC.

Top Random Variables & Distribution Functions MCQ Objective Questions

Random Variables & Distribution Functions Question 1:

Consider a scenario where \(Y_1, Y_2, ..., Y_n\) are independently and identically distributed \(N(μ, φ^{-2})\) random variables, where \(φ^{-2} > 0\). Let the prior distribution on \(φ^2\) have density \(ρ(φ^2) ∝ (1/φ^2)^β\) for some β > 0. Which of the following statements are correct?

  1. The prior distribution on \(φ^2\) is an inverse gamma distribution
  2. The posterior distribution of \(φ^2\) after observing \(Y_1, ..., Y_n\) is proportional to \((1/φ^2)^{n/2 + β} \times e^{-nȲ^2/2φ^2}\)
  3. The joint prior distribution of \((μ, φ^2)\) is a normal-inverse gamma distribution when μ has a normal prior distribution with mean 0 and variance \(φ^2\)
  4. The posterior mean of \(φ^2\) is \((nȲ^2 + β)/((n/2) + β - 1)\) when β > 1/2 and n is large
  5. The joint distribution of \((μ, φ^2)\) is a Pareto distribution

Answer (Detailed Solution Below)

Option :

Random Variables & Distribution Functions Question 1 Detailed Solution

Explanation:

(i). The prior distribution on \(φ^2\) is an inverse gamma distribution

This statement is correct for β > 0. The inverse gamma density takes the form \(π(x) ∝ x^{-(α+1)} e^{-b/x}\), and here \(ρ(φ^2) ∝ (φ^2)^{-β}\), which matches this kernel, so an inverse gamma distribution is applicable in this case.

(ii). The posterior distribution of \(φ^2\) after observing \(Y_1, ..., Y_n\) is proportional to \((1/φ^2)^{n/2 + β} \times e^{-nȲ^2/2φ^2}\)

This statement is also correct. Combining the likelihood function of the sample \(Y_1, ..., Y_n\) with the prior distribution of \(φ^2\) yields a posterior distribution of this form.

(iii). The joint prior distribution of \((μ, φ^2)\) is a normal-inverse gamma distribution when μ has a normal prior distribution with mean 0 and variance \(φ^2\)

This statement is correct. The situation described is essentially the definition of the normal-inverse gamma distribution.

(iv). The posterior mean of \(φ^2\) is \((nȲ^2 + β)/((n/2) + β - 1)\) when β > 1/2 and n is large

This statement is not correct. The posterior distribution of \(φ^2\) is an inverse gamma distribution, and its mean (when it exists) does not take the specified form. Moreover, when β ≤ 1, the posterior mean does not exist.

Hence options (1), (2) and (3) are correct.
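As a quick sanity check on the inverse gamma form used in (i), the short sketch below (not part of the original solution; the shape alpha and scale b are arbitrary illustrative choices) verifies that the density \(x^{-(α+1)}e^{-b/x}\), once normalized, matches scipy.stats.invgamma:

```python
import numpy as np
from scipy import stats
from scipy.special import gamma as gamma_fn

alpha, b = 2.5, 1.0  # hypothetical shape and scale for illustration
x = np.linspace(0.1, 5.0, 50)

# Normalized inverse gamma density written out by hand.
manual = b**alpha / gamma_fn(alpha) * x**(-(alpha + 1)) * np.exp(-b / x)

# It agrees with scipy's parameterization invgamma(a=alpha, scale=b).
assert np.allclose(manual, stats.invgamma.pdf(x, a=alpha, scale=b))
```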

Random Variables & Distribution Functions Question 2:

Let a continuous random variable X follow Uniform(−1, 1). Define Y = X². Which of the following are NOT true for X and Y?

  1. They are independent and uncorrelated.
  2. They are independent but correlated.
  3. They are not independent but correlated.
  4. They are neither independent nor correlated.

Answer (Detailed Solution Below)

Option :

Random Variables & Distribution Functions Question 2 Detailed Solution

Concept:

(i) Let X be a continuous random variable following the uniform distribution U(a, b). Then its probability density function is

f(x) = \(\frac{1}{b-a}\), a ≤ x ≤ b,

E(X) = \(\frac{a+b}{2}\), and

E(α(X)) = \(\int_a^bα(x)f(x)dx\)

(ii) Two random variables X and Y are uncorrelated if Cov(X, Y) = 0

Explanation:

A continuous random variable X follows Uniform(−1, 1),

so f(x) = \(\frac12\) and E(X) = \(\frac{-1+1}{2}\) = 0

Also Y = X², a non-constant function of X,

so Y and X are dependent

Now,

Cov(X, Y) = E(XY) − E(X)E(Y)

= E(XY) (as E(X) = 0)

= E(X³) (since Y = X²)

= \(\int_{-1}^1x^3\cdot\frac12\,dx\) = 0 (as x³ is an odd function)

So Cov(X, Y) = 0, and hence X and Y are uncorrelated

Therefore option (4) is TRUE and (1), (2), (3) are NOT TRUE.

Since the question asks which statements are NOT true, options (1), (2) and (3) are correct
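A short simulation (a sketch, not part of the original solution; the sample size and conditioning threshold are arbitrary) illustrates that X and Y = X² are uncorrelated yet dependent:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = x**2

# Uncorrelated: the sample covariance is close to 0.
print("Cov(X, Y) ≈", np.cov(x, y)[0, 1])

# Dependent: conditioning on |X| being large shifts the mean of Y
# far from its unconditional value E(Y) = E(X^2) = 1/3.
print("E[Y] ≈", y.mean())
print("E[Y | |X| > 0.9] ≈", y[np.abs(x) > 0.9].mean())
```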

Random Variables & Distribution Functions Question 3:

A cumulative hazard function H(t) of a non-negative continuous random variable satisfies which of the following conditions?

  1. \(\rm\displaystyle\lim _{t \rightarrow ∞}\) H(t) = ∞
  2. H(0) = 0
  3. H(1) = 1
  4. H(t) is a nondecreasing function of t.

Answer (Detailed Solution Below)

Option :

Random Variables & Distribution Functions Question 3 Detailed Solution

Concept:

The cumulative hazard function H(t) is defined by

H(t) = \(-\ln R(t) = \int_0^t \frac{f(s)}{R(s)}\,ds\), where R = 1 − F, f is the probability density function, R is the survival function and F is the cumulative distribution function

Properties of cumulative hazard function H(t):

(i) H(t) ≥ 0 and H(0) = 0

(ii) H(t) is a nondecreasing function of t

(iii) H(t) is an unbounded function and \(\rm\displaystyle\lim _{t \rightarrow ∞}\) H(t) = ∞

Explanation:

By the direct properties of H(t), statements (1), (2) and (4) are correct

Also H(0) = −ln R(0) = −ln 1 = 0, but it is not necessary that H(1) = 1

Option (3) is false
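A concrete example (not in the original solution): if T follows an exponential distribution with rate λ, then R(t) = \(e^{-λt}\) and H(t) = −ln R(t) = λt. This satisfies H(0) = 0, is nondecreasing, and tends to ∞ as t → ∞, while H(1) = λ, which equals 1 only for the particular choice λ = 1.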

Random Variables & Distribution Functions Question 4:

Let {Xi: 1 ≤ i ≤ 2n} be independently and identically distributed normal random variables with mean μ and variance 1, and independent of a standard Cauchy random variable W. Which of the following statistics are consistent for μ?

  1. \(\rm n^{−1} \displaystyle\sum_{i=1}^n X_i\)
  2. \(\rm n^{−1} \displaystyle\sum_{i=1}^{2 n} X_i\)
  3. \(\rm n^{−1} \displaystyle\sum_{i=1}^n X_{2 i−1}\)
  4. \(\rm n^{−1}\left(\displaystyle\sum_{i=1}^n X_i+W\right)\)

Answer (Detailed Solution Below)

Option :

Random Variables & Distribution Functions Question 4 Detailed Solution

Concept:

A statistic A is said to be consistent for μ if E(A) → μ and Var(A) → 0 as n → ∞ (a sufficient condition for consistency)

Explanation:

{Xi: 1 ≤ i ≤ 2n} be independently and identically distributed normal random variables with mean μ and variance 1, and independent of a standard Cauchy random variable W

So Xi ∼ N(μ, 1) and W ∼ Cauchy(0, 1)

E(Xi) = μ and Var(Xi) = 1; the standard Cauchy W has no finite mean or variance

(1): \(\rm n^{−1} \displaystyle\sum_{i=1}^n X_i\)

E(\(\rm n^{−1} \displaystyle\sum_{i=1}^n X_i\)) = \(\frac1n\displaystyle\sum_{i=1}^n E(X_i)\) = \(\frac1n\cdot nμ\) = μ for every n

Var(\(\rm n^{−1} \displaystyle\sum_{i=1}^n X_i\)) = \(\frac1{n^2}\displaystyle\sum_{i=1}^n Var(X_i)\) = \(\frac1{n^2}\cdot n\) = \(\frac1{n}\) → 0 as n → ∞

Therefore \(\rm n^{−1} \displaystyle\sum_{i=1}^n X_i\) is consistent.

Option (1) is correct

(2): \(\rm n^{−1} \displaystyle\sum_{i=1}^{2 n} X_i\)

E(\(\rm n^{−1} \displaystyle\sum_{i=1}^{2 n} X_i\)) = \(\frac1n\displaystyle\sum_{i=1}^{2n} E(X_i)\) = \(\frac1n\cdot 2nμ\) = 2μ ≠ μ for every n

Therefore \(\rm n^{−1} \displaystyle\sum_{i=1}^{2 n} X_i\) is not consistent.

Option (2) is not correct

(3): \(\rm n^{−1} \displaystyle\sum_{i=1}^n X_{2 i−1}\)

E(\(\rm n^{−1} \displaystyle\sum_{i=1}^n X_{2 i−1}\)) = \(\frac1n\displaystyle\sum_{i=1}^n E(X_{2i-1})\) = \(\frac1n\cdot nμ\) = μ for every n

Var(\(\rm n^{−1} \displaystyle\sum_{i=1}^n X_{2 i−1}\)) = \(\frac1{n^2}\displaystyle\sum_{i=1}^n Var(X_{2i-1})\) = \(\frac1{n^2}\cdot n\) = \(\frac1{n}\) → 0 as n → ∞

Therefore \(\rm n^{−1} \displaystyle\sum_{i=1}^n X_{2 i−1}\) is consistent.

Option (3) is correct

(4): \(\rm n^{−1}\left(\displaystyle\sum_{i=1}^n X_i+W\right)\)

Expectation and variance cannot be used directly here, because the standard Cauchy W has no finite mean or variance. However, W is a single almost-surely finite random variable, so \(\frac{W}{n}\) → 0 almost surely as n → ∞. Combined with (1),

\(\rm n^{−1}\left(\displaystyle\sum_{i=1}^n X_i+W\right)\) = \(\rm n^{−1}\displaystyle\sum_{i=1}^n X_i + \frac{W}{n}\) → μ in probability as n → ∞.

Therefore \(\rm n^{−1}\left(\displaystyle\sum_{i=1}^n X_i+W\right)\) is consistent.

Option (4) is correct.
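A quick Monte Carlo sketch (not part of the original solution; μ = 2 and the sample sizes are arbitrary) shows statistics (1), (3) and (4) settling near μ while (2) settles near 2μ:

```python
import numpy as np

rng = np.random.default_rng(1)
mu = 2.0

for n in (100, 10_000):
    x = rng.normal(mu, 1.0, size=2 * n)  # X_1, ..., X_{2n}
    w = rng.standard_cauchy()            # a single standard Cauchy draw
    t1 = x[:n].mean()                    # (1) mean of the first n terms
    t2 = x.sum() / n                     # (2) sum of all 2n terms over n
    t3 = x[::2].mean()                   # (3) odd-indexed terms X_1, X_3, ...
    t4 = (x[:n].sum() + w) / n           # (4) mean perturbed by W/n
    print(f"n={n}: t1={t1:.3f}, t2={t2:.3f}, t3={t3:.3f}, t4={t4:.3f}")
```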

Random Variables & Distribution Functions Question 5:

Let X1 and X2 be two independent random variables such that X1 follows a gamma distribution with mean 10 and variance 10, and X2 ∼ N(3, 4). Let f1 and f2 denote the density functions of X1 and X2, respectively. Define a new random variable Y so that for y ∈ ℝ, it has density function

f(y) = 0.4 f1(y) + qf2(y)

Which of the following are true?

  1. q = 0.6
  2. E[Y] = 5.8
  3. Var(Y) = 3.04
  4. Y = 0.4X1 + qX2

Answer (Detailed Solution Below)

Option :

Random Variables & Distribution Functions Question 5 Detailed Solution

Concept:

If f1 and f2 are two probability density functions, then αf1 + βf2 is also a probability density function whenever α, β ≥ 0 and α + β = 1

Explanation:

f1 and f2 denote the density functions of X1 and X2,

and f(y) = 0.4f1(y) + qf2(y) is a probability density function, then

Y = 0.4X1 + qX2 is a random variable and

0.4 + q = 1 ⇒ q = 0.6

Option (1), (4) are correct

Given X1 follows a gamma distribution with mean 10 and variance 10, and X2 ∼ N(3, 4)

So E[X1] = 10, Var[X1] = 10, E[X2] = 3, Var[X2] = 4 

E[Y] = 0.4E[X1] + qE[X2] = 0.4 × 10 + 0.6 × 3 = 5.8

Option (2) is correct

Var[Y] = (0.4)²Var[X1] + q²Var[X2] = (0.4)² × 10 + (0.6)² × 4 = 3.04

Option (3) is correct
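The arithmetic can be checked in a few lines (a sketch following the solution's treatment of Y as the independent combination 0.4X1 + 0.6X2):

```python
# Moments given in the problem statement.
E1, V1 = 10.0, 10.0  # gamma with mean 10, variance 10
E2, V2 = 3.0, 4.0    # N(3, 4)
a = 0.4
q = 1 - a            # weights of a density mixture must sum to 1

print("q =", q)                           # 0.6
print("E[Y] =", a * E1 + q * E2)          # 5.8
print("Var[Y] =", a**2 * V1 + q**2 * V2)  # 3.04
```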

Random Variables & Distribution Functions Question 6:

Let X1, X2, ... be i.i.d. random variables having a χ2-distribution with 5 degrees of freedom.
Let a ∈ \(\mathbb{R}\) be constant. Then the limiting distribution of \(a\left(\frac{X_1+\cdots+X_n-5 n}{\sqrt{n}}\right)\) is 

  1. Gamma distribution for an appropriate value of a
  2. χ2-distribution for an appropriate value of a
  3. Standard normal distribution for an appropriate value of a
  4. A degenerate distribution for an appropriate value of a

Answer (Detailed Solution Below)

Option 3 : Standard normal distribution for an appropriate value of a

Random Variables & Distribution Functions Question 6 Detailed Solution

Given:-

X1, X2, ... are i.i.d. random variables having a χ2-distribution with 5 degrees of freedom.

Concept Used:-

The limiting distribution of the given expression can be found using the central limit theorem.

The central limit theorem states that the sum of many independent and identically distributed random variables, properly normalized, converges in distribution to a normal distribution.

Explanation:-

Here, we have n i.i.d. random variables with a χ2-distribution with 5 degrees of freedom.

The mean of each χ2-distributed variable is 5 and its variance is 2 × 5 = 10.

Therefore, the mean of the sum of n such variables is 5n and its variance is n × 10 = 10n.

We can normalize the sum by subtracting its mean and dividing by its standard deviation. By the CLT,

\(\dfrac{X_1 + X_2 + \cdots + X_n - 5n}{\sqrt{10n}}\)

converges in distribution to N(0, 1) as n goes to infinity. Rewriting the given expression,

\(a\left[\dfrac{X_1 + X_2 + \cdots + X_n - 5n}{\sqrt{n}}\right] = a\sqrt{10}\left[\dfrac{X_1 + X_2 + \cdots + X_n - 5n}{\sqrt{10n}}\right],\)

so the overall expression converges in distribution to a normal distribution with mean zero and variance 10a². Choosing a = 1/√10 makes the limit the standard normal distribution.

So, the limiting distribution of \(a\left(\frac{X_1+\cdots+X_n-5 n}{\sqrt{n}}\right)\) is the standard normal distribution for an appropriate value of a.

Hence, the correct option is 3.
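A simulation sketch (not part of the original solution; n and the number of replications are arbitrary) with a = 1/√10; the mean and standard deviation of the normalized statistic should be close to 0 and 1:

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 1_000, 5_000
a = 1 / np.sqrt(10)

# reps independent sums of n chi-square(5) variables.
s = rng.chisquare(df=5, size=(reps, n)).sum(axis=1)
z = a * (s - 5 * n) / np.sqrt(n)

print("mean ≈", z.mean())  # close to 0
print("std ≈", z.std())    # close to 1
```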

Random Variables & Distribution Functions Question 7:

Suppose that a sequence of random variables {Xn}n ≥ 1 and the random variable X are defined on the same probability space. Then which of the following statements are true? 

  1. Xn converges to X almost surely as n → ∞ implies that Xn converges to X in probability as n → ∞.
  2. Xn converges to X in probability as n → ∞ implies that Xn converges to X almost surely as n → ∞.
  3. If \(\rm \Sigma_{n=1}^\infty P(|X_n-X|>δ)<\infty\) for all δ > 0, then Xn converges to X almost surely as n → ∞.
  4. If Xn converges to X in distribution as n → ∞, and X is a constant with probability 1, then Xn converges to X in probability as n → ∞.

Answer (Detailed Solution Below)

Option :

Random Variables & Distribution Functions Question 7 Detailed Solution

We will update the solution soon.

Random Variables & Distribution Functions Question 8:

Consider a linear model 

Yi = β1 + β2 + ⋯ + βi + ϵi, 1 ≤ i ≤ n,

where the errors ϵi are uncorrelated with zero mean and finite variance σ² > 0. Let β̂i be the best linear unbiased estimator (BLUE) of βi, i = 1, 2, ..., n. Then, which of the following statements are true?

  1. The sum of squared residuals is strictly positive with probability 1.
  2. For every βi, 1 ≤ i ≤ n, there are infinitely many linear unbiased estimators.
  3. Var(β̂1) = σ²
  4. Y3 − Y2 is the BLUE of β3.

Answer (Detailed Solution Below)

Option :

Random Variables & Distribution Functions Question 8 Detailed Solution

We will update the solution soon.

Random Variables & Distribution Functions Question 9:

Let X be a random variable with the cumulative distribution function (CDF) given by \(F(x) = \begin{cases} 0, & x < 0, \\ \frac{x+2}{5}, & 0 \leq x < 3, \\ 1, & x \geq 3. \end{cases}\)

Find the value of \(P\left(1 < X \leq 2\right) + P(X = 0)\).

  1. 1/5
  2. 2/5
  3. 3/5
  4. 4/5

Answer (Detailed Solution Below)

Option 3 : 3/5

Random Variables & Distribution Functions Question 9 Detailed Solution

Solution:  

We use the properties of the cumulative distribution function (CDF).

Step 1: Calculate \(P(1 < X ≤ 2)\)

Using the CDF properties, for \(a < X ≤ b\) :

\(P(a < X ≤ b) = F(b) - F(a)\).

Here, a = 1 and b = 2 .

For 0 ≤ x < 3 , the CDF is given by \(F(x) = \frac{x+2}{5}\) .

Substituting these values:

\(F(2) = \frac{2 + 2}{5} = \frac{4}{5}, \quad F(1) = \frac{1 + 2}{5} = \frac{3}{5}.\)

Thus:

\(P(1 < X ≤ 2) = F(2) - F(1) = \frac{4}{5} - \frac{3}{5} = \frac{1}{5}.\)

Step 2: Calculate P(X = 0)

The probability P(X = 0) equals the size of the jump of the CDF at x = 0. Here \(F(0) = \frac{0+2}{5} = \frac{2}{5}\) while \(F(0^-) = 0\), so F is not continuous at 0 and

\(P(X = 0) = F(0) - F(0^-) = \frac{2}{5}.\)

Step 3: Add the probabilities

Combining the results:

\(P(1 < X ≤ 2) + P(X = 0) = \frac{1}{5} + \frac{2}{5} = \frac{3}{5}.\)

Hence the correct option is (3)
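As a check, the CDF from the problem can be coded directly (a minimal sketch):

```python
def F(x: float) -> float:
    # CDF given in the problem statement.
    if x < 0:
        return 0.0
    if x < 3:
        return (x + 2) / 5
    return 1.0

p_interval = F(2) - F(1)    # P(1 < X <= 2) = 1/5
p_atom = F(0) - F(-1e-12)   # jump at x = 0: P(X = 0) = 2/5
print(p_interval + p_atom)  # 0.6, i.e. 3/5
```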

 

Random Variables & Distribution Functions Question 10:

Let X and Y be jointly distributed continuous random variables with joint probability density function \(\rm f(x, y)=\left\{\begin{matrix}\frac{x}{y}, & if\ 0 < x < y < 2\\ 0, & otherwise\end{matrix}\right.\)

Which of the following statements are true? 

  1. \(\rm P\left(X<\frac{1}{2}|Y=1\right)=\frac{1}{4}\)
  2. E(Y) = \(\frac{1}{4}\)
  3. \(\rm P\left(X < \frac{Y}{2}\right)=\frac{1}{4}\)
  4. \(\rm E\left(\frac{Y}{X}\right)=\frac{1}{4}\)

Answer (Detailed Solution Below)

Option :

Random Variables & Distribution Functions Question 10 Detailed Solution

The correct answers are (1) and (3).

With the support 0 < x < y < 2, the marginal density of Y is \(f_Y(y)=\int_0^y \frac{x}{y}\,dx=\frac{y}{2}\) for 0 < y < 2, so the conditional density of X given Y = y is \(f(x|y)=\frac{2x}{y^2}\) on (0, y).

(1): \(\rm P\left(X<\frac{1}{2}|Y=1\right)=\int_0^{1/2} 2x\,dx=\frac{1}{4}\), so (1) is true.

(3): \(\rm P\left(X<\frac{Y}{2}\right)=\int_0^2 \frac{1}{y}\int_0^{y/2} x\,dx\,dy=\int_0^2 \frac{y}{8}\,dy=\frac{1}{4}\), so (3) is true.

(2): E(Y) = \(\int_0^2 y\cdot\frac{y}{2}\,dy=\frac{4}{3}\) ≠ 1/4, and (4): \(\rm E\left(\frac{Y}{X}\right)=\int_0^2\int_0^y \frac{y}{x}\cdot\frac{x}{y}\,dx\,dy=2\) ≠ 1/4, so (2) and (4) are false.
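A Monte Carlo sketch (assuming the reconstructed support 0 < x < y < 2) confirms the two probabilities; sampling uses the inverse-CDF forms Y = 2√U and X = Y√V derived from \(f_Y(y)=y/2\) and \(f(x|y)=2x/y^2\):

```python
import numpy as np

rng = np.random.default_rng(3)
m = 1_000_000

y = 2 * np.sqrt(rng.uniform(size=m))  # CDF of Y is y^2/4 on (0, 2)
x = y * np.sqrt(rng.uniform(size=m))  # CDF of X|Y=y is x^2/y^2 on (0, y)

near_one = np.abs(y - 1) < 0.01
print("P(X < 1/2 | Y ≈ 1) ≈", np.mean(x[near_one] < 0.5))  # ≈ 1/4
print("P(X < Y/2) ≈", np.mean(x < y / 2))                  # ≈ 1/4
```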