\documentclass[12pt]{exam}
\usepackage[utf8]{inputenc}
\usepackage[margin=1in]{geometry}
\usepackage{amsmath,amsthm,amssymb}
\usepackage{multicol}
\usepackage{graphicx}
\usepackage{float}
\usepackage{parskip}
\usepackage[framed,numbered,autolinebreaks,useliterate]{mcode}
\graphicspath{ {./images/} }
\newcommand{\class}{Mathematics (MTH 225B)}
\newcommand{\term}{Spring 2020}
\newcommand{\examnum}{Final Exam Solution}
\newcommand{\examdate}{7/6/2020}
\newcommand\ddfrac[2]{\frac{\displaystyle #1}{\displaystyle #2}}
\pagestyle{head}
\runningheader{\class}{\examnum\ - Page \thepage\ of \numpages}{\examdate}
\runningheadrule
\renewcommand{\theenumi}{\alph{enumi}}
\renewcommand*\labelenumi{(\theenumi)}
\begin{document}
\noindent
\begin{tabular*}{\textwidth}{l @{\extracolsep{\fill}} r @{\extracolsep{6pt}} l}
\textbf{\class} & \textbf{Name:} & \makebox[2in]{Youssef Walid Hassan}\\
\textbf{\term} &&\\
\textbf{\examnum} &&\\
\textbf{\examdate} &&\\
\end{tabular*}\\
\
\rule[2ex]{\textwidth}{2pt}
\textbf{\underline{Question 1}}
\textbf{[I] One packet:} We can model this as a Geometric Random Variable (a special case of a Bernoulli process), since it represents a run of successive failures followed by a single success.
$X$: the number of trials required for the packet to reach Samira.\
\begin{enumerate}
\item The expected number of trials of a Geometric Random Variable is known to be $E[X] = 1/p$, where $0<p<1$.\
\begin{figure}[H]
\centering
\includegraphics[width=410px]{prob1a}
\caption{The expected number of trials of transferring one packet - part (a)}
\end{figure}
The Octave code for plotting the curve:
\begin{lstlisting}
p = [0.01:0.0001:1];
plot(p, 1./p, 'LineWidth', 1.5);
xlabel("p: the probability of success of a trial");
ylabel("E[X]: the expected number of trials");
title("Expected number of trials to successfully send a packet");
grid on;
\end{lstlisting}
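A quick Monte Carlo cross-check of $E[X] = 1/p$ (sketched in Python so it runs standalone; the helper `trials_until_success` is introduced here for illustration):

```python
# Monte Carlo cross-check: the sample mean of geometric trial counts
# should approach the theoretical expectation E[X] = 1/p.
import random

def trials_until_success(p):
    """Count Bernoulli(p) trials up to and including the first success."""
    trials = 1
    while random.random() >= p:
        trials += 1
    return trials

random.seed(0)  # fixed seed so the check is reproducible
p = 0.25
n = 100_000
avg = sum(trials_until_success(p) for _ in range(n)) / n
print(avg)  # close to 1/p = 4
```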
\item The distribution of $X$ is the Geometric distribution, whose probability mass function is $f(x) = q^{x-1} \cdot p$, where $q = 1-p$.
\begin{figure}[H]
\centering
\includegraphics[width=450px]{prob1b}
\caption{f(x): the pmf of X - part (b)}
\end{figure}
The Octave code for the plot above:
\begin{lstlisting}
hold on;
xlabel("X: the number of trials");
ylabel("f(x): probability mass function of X");
for p = 0.2:0.2:0.8
    x = [1:0.1:12];
    fx = (1-p).^(x-1) * p;
    plot(x, fx, 'LineWidth', 1.5);
endfor
legend("p = 0.2", "p = 0.4", "p = 0.6", "p = 0.8");
hold off;
\end{lstlisting}
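As a sanity check (in Python, for portability): evaluated at integer $x$ — the plot above samples non-integer $x$ only for visual smoothness — the pmf should sum to 1:

```python
# The geometric pmf f(x) = (1-p)^(x-1) * p should sum to 1 over
# x = 1, 2, 3, ... for any 0 < p < 1.
def geometric_pmf(x, p):
    """P(X = x): x - 1 failures followed by one success."""
    return (1 - p) ** (x - 1) * p

for p in [0.2, 0.4, 0.6, 0.8]:
    total = sum(geometric_pmf(x, p) for x in range(1, 1000))
    print(f"p = {p}: pmf sums to {total:.6f}")
```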
\textbf{[II] Simulation:} \item We can generate the plot by looping through $p$ from $p = 0.1$ to $p = 0.9$ with a step of 0.05, generating random values until a trial succeeds and counting the number of trials.\
The Octave code for part (c) is as follows:
\begin{lstlisting}
X = [];
for p = 0.1:0.05:0.9
    number = rand; % generate a uniform random number in (0, 1)
    trials = 1;
    while (number >= p)
        trials = trials + 1;
        number = rand;
    end
    X = [X trials];
end
p = [0.1:0.05:0.9];
plot(p, X, 'LineWidth', 1.5);
xlabel("p: the probability of success");
ylabel("X: the number of trials");
\end{lstlisting}
\begin{figure}[H]
\centering
\includegraphics[width=450px]{prob1c}
\caption{Number of trials versus the probability of success p - part (c)}
\end{figure}
\item Running the above program 10 times, we get the following curves:
\begin{figure}[H]
\centering
\includegraphics[width=450px]{prob1d}
\caption{Plotting 10 curves of the number of trials - part (d)}
\end{figure}
\begin{lstlisting}
hold on;
p_domain = [0.1:0.05:0.9];
for i = 1:10
    X = [];
    for p = 0.1:0.05:0.9
        number = rand; % generate a uniform random number in (0, 1)
        trials = 1;
        while (number >= p)
            trials = trials + 1;
            number = rand;
        end
        X = [X trials];
    end
    plot(p_domain, X, 'LineWidth', 1.5);
end
xlabel("p: the probability of success");
ylabel("X: the number of trials");
hold off;
\end{lstlisting}
\item The plot in (d) contains multiple random experiments; some fit the curve in (a) well and some do not, due to the randomness used to generate each curve. If we increase the number of iterations in (d), it approaches the curve in (a) more and more.
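The convergence claim can be illustrated numerically (a Python sketch, for portability): averaging many simulated runs at each $p$ approaches the theoretical curve $1/p$:

```python
# Averaging many runs of the part-(c) experiment at each p approaches
# the theoretical expectation 1/p.
import random

def trials_until_success(p):
    """Count Bernoulli(p) trials up to and including the first success."""
    trials = 1
    while random.random() >= p:
        trials += 1
    return trials

random.seed(1)  # fixed seed so the check is reproducible
runs = 20_000
averages = {}
for p in [0.1, 0.5, 0.9]:
    averages[p] = sum(trials_until_success(p) for _ in range(runs)) / runs
    print(f"p = {p}: simulated average = {averages[p]:.3f}, theory = {1/p:.3f}")
```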
\textbf{[III] One message:} \item The expected number of trials for 800 packets in sequence is 800 times the expected number of trials for one packet: $E[X] = n \cdot 1/p = 800/p$.
\begin{figure}[H]
\centering
\includegraphics[width=450px]{prob1f}
\caption{The expected number of trials to send 800 packets - part (f)}
\end{figure}
The Octave code for the plot above:
\begin{lstlisting}
p = [0.007:0.007:1]; % start above zero to avoid division by zero
plot(p, 800./p, 'LineWidth', 1.5);
xlabel("p: the probability of success of a trial");
ylabel("E[X]: the expected number of trials");
title("Expected number of trials to successfully send 800 packets");
\end{lstlisting}
\pagebreak
\item The Octave code in this part is similar to part (c), except that we need to send 800 packets for each $p$, counting every trial (including each successful transmission):
\begin{lstlisting}
X = [];
for p = 0.1:0.05:0.9
    trials = 0;
    for i = 1:800
        trials = trials + 1;  % count the first attempt for this packet
        number = rand;        % generate a uniform random number in (0, 1)
        while (number >= p)   % retransmit until this packet succeeds
            trials = trials + 1;
            number = rand;
        end
    end
    X = [X trials];
end
p = [0.1:0.05:0.9];
plot(p, X, 'LineWidth', 1.5);
xlabel("p: the probability of success");
ylabel("X: the number of trials");
\end{lstlisting}
\begin{figure}[H]
\centering
\includegraphics[width=450px]{prob1g}
\caption{The number of trials - part (g)}
\end{figure}
\pagebreak
\textbf{[IV] Computer Network:} We can calculate the probability that a packet is delivered over the network by subtracting the network's probability of failure from 1. For the network as a whole to fail, all three of R1, R2, and R3 must fail to deliver the packet.
\begin{align*}
P_{\text{delivering the message}} &= 1 - P_{\text{failure to deliver the message}}\\
&= 1 - P(R_1' \cap R_2' \cap R_3')\\
&= 1 - (1-p)^2 \cdot (1-p)^2 \cdot (1-p)^2 \\
&= 1 - (1-p)^6
\end{align*}
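A Monte Carlo sketch of the model above (one assumption is made explicit here: delivery fails only when all six independent failure events, two per route, occur, each with probability $1-p$):

```python
# Monte Carlo estimate of the delivery probability 1 - (1 - p)^6 under
# the six-independent-failures model derived above.
import random

def delivered(p, links=6):
    """True unless every one of the `links` independent attempts fails."""
    return any(random.random() < p for _ in range(links))

random.seed(2)  # fixed seed so the check is reproducible
p = 0.3
n = 100_000
estimate = sum(delivered(p) for _ in range(n)) / n
print(estimate)  # close to 1 - (1 - p)**6 = 0.882351...
```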
\item We can model the transmission of a message as a Geometric Random Variable with probability of success $p' = P_{\text{delivering the message}}$ calculated above; therefore the expected number of trials of this Geometric Random Variable is $E[X] = 1/p'$.
\begin{figure}[H]
\centering
\includegraphics[width=450px]{prob1h}
\caption{The expected number of trials of sending the message over the network - part (h)}
\end{figure}
\pagebreak
The code for the plot above:
\begin{lstlisting}
p = [0.01:0.0001:1];
P = 1 - (1-p).^6;
plot(p, 1./P, 'LineWidth', 1.5);
xlabel("p: the probability of success of a trial");
ylabel("E[X]: the expected number of trials");
title("Expected number of trials to successfully send a message");
\end{lstlisting}
\item We can plot the two curves on the same axes to get a clear view of when each case is better than the other; plotting them together we get:
\end{enumerate}
\begin{figure}[H]
\centering
\includegraphics[width=400px]{prob1i}
\caption{Plotting One Packet curve over Computer Network - part (i)}
\end{figure}
\begin{lstlisting}
hold on;
p = [0.01:0.0001:1];
P = 1 - (1-p).^6;
plot(p, 1./p, 'LineWidth', 1.5);
plot(p, 1./P, 'LineWidth', 1.5);
xlabel("p: the probability of success of a trial");
ylabel("E[X]: the expected number of trials");
title("Expected number of trials to successfully send a packet");
legend("One Packet", "Computer Network");
hold off;
\end{lstlisting}
We immediately notice that the \textit{Computer Network} expected-number-of-trials curve is a lower bound on the \textit{One Packet} curve, which implies that transferring a message over the \textit{Computer Network} will \textbf{always} be better than transferring it as \textit{One Packet}.
\textbf{\underline{Question 2}}
\begin{enumerate}
\item Since $f(x)$ is a valid pdf, it must integrate to 1 over the whole domain; we can therefore integrate $f(x)$, equate the result to 1, and solve for $k$.
$$\int_{-\infty}^{\infty} f(x) dx = 1$$
$$\int_{-3}^{3} k(16.5-x^2) dx = 1$$
$$k \cdot (16.5x - \frac{x^3}{3})\Bigg|_{-3}^{3} = 1$$
$$k \cdot (16.5(3) - \frac{(3)^3}{3} - 16.5(-3) + \frac{(-3)^3}{3}) = 1$$
$$k \cdot 81 = 1$$
\begin{center}
\boxed{k = \frac{1}{81}}
\end{center}
\item We can find $P(X < 1.5)$ by integrating the pdf from the lower limit of the domain to 1.5:
\begin{align*}
P(X < 1.5) &= \int_{-\infty}^{1.5} f(x) dx\\
&= \int_{-3}^{1.5} \frac{1}{81}(16.5-x^2) dx\\
&= \frac{1}{81} \cdot (16.5x - \frac{x^3}{3})\Bigg|_{-3}^{1.5}\\
& = \frac{1}{81} \cdot (16.5(1.5) - \frac{(1.5)^3}{3} - 16.5(-3) + \frac{(-3)^3}{3})
\end{align*}
\begin{center}
\boxed{P(X < 1.5) = \frac{19}{24} = 0.7916667}
\end{center}
\pagebreak
\item We can find $P(|X| > 2.4)$ by expanding the absolute value: $P(|X| > 2.4) = P(X < -2.4 \cup X > 2.4)$, which equals $1 - P(-2.4 < X < 2.4)$ and can be calculated directly from the pdf.
\begin{align*}
P(|X| > 2.4) &= P(X < -2.4 \cup X > 2.4) = 1 - P(-2.4 < X < 2.4)\\
&= 1 - \int_{-2.4}^{2.4} f(x) dx\\
&= 1 - \int_{-2.4}^{2.4} \frac{1}{81}(16.5-x^2) dx\\
&= 1 - \frac{1}{81} \cdot (16.5x - \frac{x^3}{3})\Bigg|_{-2.4}^{2.4}\\
&= 1 - \frac{1}{81} \cdot (16.5(2.4) - \frac{(2.4)^3}{3} - 16.5(-2.4) + \frac{(-2.4)^3}{3})
\end{align*}
\begin{center}
\boxed{P(|X| > 2.4) = 0.136}
\end{center}
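The three answers above can be cross-checked exactly with rational arithmetic (a Python sketch using the standard library's `fractions`):

```python
# Exact cross-check of the Question 2 answers using rational arithmetic.
from fractions import Fraction

def F(x):
    """Antiderivative of 16.5 - x^2, i.e. 16.5x - x^3/3, exactly."""
    x = Fraction(x)
    return Fraction(33, 2) * x - x ** 3 / 3

k = 1 / (F(3) - F(-3))                                    # part (a)
p_b = k * (F(Fraction(3, 2)) - F(-3))                     # part (b)
p_c = 1 - k * (F(Fraction(12, 5)) - F(Fraction(-12, 5)))  # part (c)
print(k, p_b, p_c)  # 1/81 19/24 17/125
```

Note that $17/125 = 0.136$ exactly, matching the boxed value.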
\end{enumerate}
\textbf{\underline{Question 3}}
\begin{enumerate}
\item The total number of people with university education = 22 + 17 = 39\\
$P(\text{the person has university education}) = \frac{39}{200} = 0.195$\
\item The total number of males with secondary education = 28\\
$P(\text{the person is a male with secondary education}) = \frac{28}{200} = 0.14$\
\item The total number of people with secondary education = 78\\
$P(\text{the person is a male} \vert \text{the person has secondary education}) = \frac{28}{78} = 0.3589$\
We can also calculate it using the identity $P(A \vert B) = P(A \cap B)/P(B)$:\\
$P(\text{the person is a male} \vert \text{the person has secondary education}) = \frac{28}{200} / \frac{78}{200} = \frac{28}{78} = 0.3589$\
\item The total number of females = 112 and the total number of females without a university degree = 95\\
$P(\text{the person does not have a university degree} \vert \text{the person is a female}) = \frac{95}{112} = 0.848214$\
We can also calculate it using the identity $P(A \vert B) = P(A \cap B)/P(B)$:\\
$P(\text{the person does not have a university degree} \vert \text{the person is a female}) = \frac{95}{200} / \frac{112}{200} = \frac{95}{112} = 0.848214$\
\item We can check whether the two events are independent by testing whether $P(A \cap B) = P(A) \cdot P(B)$; if the equality fails, the events are dependent.\
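The test can be illustrated with the counts from parts (a)-(d) (a Python sketch; the specific pair of events checked here, being male and having secondary education, is an assumed example, since the exact pair asked about is not restated here):

```python
# Independence check: compare P(A and B) with P(A) * P(B) using the
# table counts quoted in parts (a)-(d).
from fractions import Fraction

total = 200
males = total - 112        # 112 females, from part (d)
secondary = 78             # from part (c)
male_and_secondary = 28    # from part (b)

p_a = Fraction(males, total)
p_b = Fraction(secondary, total)
p_ab = Fraction(male_and_secondary, total)
print(p_ab == p_a * p_b)  # False, so the events are dependent
```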
\end{enumerate}
\pagebreak
\textbf{\underline{Question 4}}
\begin{enumerate}
\item Since $X$ follows the distribution $U(0,1)$:
$$E(X) = \frac{1+0}{2} = 0.5 ~ \text{and} ~ Var(X) = \frac{(1-0)^2}{12} = \frac{1}{12} = 0.0833$$
\item Using the linearity property $E[aX+b] = a \cdot E[X] + b$ we get:
$$E[2X + 3] = 2 \cdot E[X] + 3 = 2 \cdot 0.5 + 3 = 4$$
and using $Var(aX+b) = a^2 \cdot Var(X)$ we get:
$$Var(2X+3) = 2^2 \cdot Var(X) = 4 \cdot \frac{1}{12} = 0.3333$$
\item Using $P(A~|~B) = P(A \cap B) / P(B)$:
\begin{align*}
P(X > 0.2~|~X \leq 0.5) &= P(X > 0.2 \cap X \leq 0.5) / P( X \leq 0.5)\\
&= \ddfrac{\frac{0.5-0.2}{1.0 - 0}}{\frac{0.5 - 0}{1.0 - 0}}\\
&= \frac{0.5-0.2}{0.5 - 0}
\end{align*}
\begin{center}
\boxed{P(X > 0.2~|~X \leq 0.5) = 0.6}
\end{center}
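The three results above can be cross-checked exactly (a Python sketch using rational arithmetic):

```python
# Exact cross-check of the Question 4 results for X ~ U(0, 1).
from fractions import Fraction

a, b = Fraction(0), Fraction(1)
E = (a + b) / 2                # mean of U(0, 1)
Var = (b - a) ** 2 / 12        # variance of U(0, 1)
E2 = 2 * E + 3                 # E[2X + 3] by linearity
Var2 = 2 ** 2 * Var            # Var(2X + 3) = a^2 Var(X)
# P(X > 0.2 | X <= 0.5) = P(0.2 < X <= 0.5) / P(X <= 0.5)
cond = (Fraction(1, 2) - Fraction(1, 5)) / Fraction(1, 2)
print(E, Var, E2, Var2, cond)  # 1/2 1/12 4 1/3 3/5
```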
\end{enumerate}
\pagebreak
\textbf{\underline{Question 5}}\
\begin{enumerate}
\item We can use the definition to find the marginal densities:
\begin{align*}
f(x) &= \int_{-\infty}^{\infty} f(x,y) dy\\
&= \int_{0}^{1} \frac{3x-y}{4} dy\\
&= \frac{1}{4} \cdot (3xy - \frac{y^2}{2}) \Bigg|_{0}^{1}\\
&= \frac{1}{4} \cdot (3x - \frac{1}{2})
\end{align*}
\begin{center}
\boxed{f(x) = \frac{3x}{4} - \frac{1}{8}}
\end{center}
and
\begin{align*}
f(y) &= \int_{-\infty}^{\infty} f(x,y) dx\\
&= \int_{1}^{2} \frac{3x-y}{4} dx\\
&= \frac{1}{4} \cdot (\frac{3x^2}{2} - xy) \Bigg|_{1}^{2}\\
&= \frac{1}{4} \cdot (\frac{3(4)}{2} - 2y - \frac{3}{2} + y)\\
&= \frac{1}{4} \cdot (\frac{9}{2} - y )
\end{align*}
\begin{center}
\boxed{f(y) = \frac{9}{8} - \frac{y}{4}}
\end{center}
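As a sanity check (a Python sketch with exact arithmetic), each marginal density found above integrates to 1 over its support ($x \in [1,2]$, $y \in [0,1]$), as a valid pdf must:

```python
# Each marginal density should integrate to 1 over its support.
from fractions import Fraction

def Fx(x):
    """Antiderivative of f(x) = 3x/4 - 1/8."""
    x = Fraction(x)
    return 3 * x ** 2 / 8 - x / 8

def Fy(y):
    """Antiderivative of f(y) = 9/8 - y/4."""
    y = Fraction(y)
    return 9 * y / 8 - y ** 2 / 8

print(Fx(2) - Fx(1), Fy(1) - Fy(0))  # 1 1
```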
\pagebreak
\item Using $P( a < Y < b~|~ X = x) = \int_{a}^{b} f(y|x) dy$ and $f(y|x) = \dfrac{f(x,y)}{f_{X}(x)}$, we need to find $f(y|x)$ first:
\begin{align*}
f(y|x) &= \dfrac{f(x,y)}{f_{X}(x)}\\
&= \dfrac{(3x-y)/4}{(3x-\frac{1}{2})/4}\Bigg|_{x = \frac{4}{3}}\\
&= \dfrac{3 \cdot \frac{4}{3}-y}{3 \cdot \frac{4}{3}-\frac{1}{2}}\\
&= \dfrac{4-y}{4-\frac{1}{2}}\\
&= \dfrac{4-y}{7/2}\\
&= \dfrac{8-2y}{7}
\end{align*}
Now we can integrate $f(y|x)$ over the interval of $y$ to find $P(Y < 1/2~|~ X = 4/3)$:
\begin{align*}
P(Y < 1/2~|~ X = 4/3) &= \int_{0}^{1/2} f(y|x) dy\\
&= \int_{0}^{1/2} \dfrac{8-2y}{7} dy\\
&= \frac{1}{7} \cdot (8y-y^2)\Bigg|_{y = 0}^{y = 1/2}\\
&= \frac{1}{7} \cdot (8 \cdot \frac{1}{2}-(\frac{1}{2})^2)
\end{align*}
\begin{center}
\boxed{P(Y < 1/2~|~ X = 4/3) = 0.5357}
\end{center}
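Cross-checking part (b) exactly (a Python sketch): the integral evaluates to $15/28$, which rounds to the boxed 0.5357:

```python
# Integrate f(y|x) = (8 - 2y)/7 from 0 to 1/2 exactly.
from fractions import Fraction

def G(y):
    """Antiderivative of (8 - 2y)/7, i.e. (8y - y^2)/7."""
    y = Fraction(y)
    return (8 * y - y ** 2) / 7

prob = G(Fraction(1, 2)) - G(0)
print(prob, float(prob))  # 15/28 0.5357142857142857
```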
\item We can find $P(Y < 1/2~|~X < 4/3)$ by integrating the joint probability density function $f(x,y)$:
$$P(Y < 1/2~|~X < 4/3) = \frac{\int_{0}^{1/2} \int_{1}^{4/3} f(x,y) \,dx\,dy}{\int_{1}^{4/3} f_{X}(x) dx}$$
\pagebreak
Evaluating $\int_{0}^{1/2} \int_{1}^{4/3} f(x,y) \,dx\,dy$:
\begin{align*}
\int_{0}^{1/2} \int_{1}^{4/3} f(x,y) \,dx\,dy &= \int_{0}^{1/2} \int_{1}^{4/3} (3x-y)/4 \,dx\,dy\\
& = \frac{1}{4} \int_{0}^{1/2} (\frac{3x^2}{2}-xy)\Bigg|_{x = 1}^{x = 4/3}\,dy\\
& = \frac{1}{4} \int_{0}^{1/2} (\frac{3(4/3)^2}{2}-\frac{4y}{3} - \frac{3(1)^2}{2} + y)\,dy\\
& = \frac{1}{4} \int_{0}^{1/2} (\frac{7}{6}-\frac{y}{3})\,dy\\
& = \frac{1}{4} (\frac{7y}{6}-\frac{y^2}{6}) \Bigg|_{y = 0}^{y = 1/2}\\
& = \frac{1}{4} (\frac{7(1/2)}{6}-\frac{(1/2)^2}{6})
\end{align*}
\begin{center}
\boxed{\int_{0}^{1/2} \int_{1}^{4/3} f(x,y) \,dx\,dy = \frac{13}{96}}
\end{center}
Evaluating $\int_{1}^{4/3} f_{X}(x) dx$:
\begin{align*}
\int_{1}^{4/3} f_{X}(x) dx &= \int_{1}^{4/3} \frac{1}{4} \cdot (3x - \frac{1}{2}) dx\\
& = \frac{1}{4} \cdot (\frac{3x^2}{2} - \frac{x}{2}) \Bigg|_{x = 1}^{x = 4/3}\\
& = \frac{1}{4} \cdot (\frac{3(4/3)^2}{2} - \frac{(4/3)}{2} - \frac{3(1)^2}{2} + \frac{1}{2})
\end{align*}
\begin{center}
\boxed{\int_{1}^{4/3} f_{X}(x) dx = \frac{1}{4}}
\end{center}
Now we can finally evaluate $P(Y < 1/2~|~X < 4/3)$ by dividing the two expressions:
$$P(Y < 1/2~|~X < 4/3) = \frac{13/96}{1/4}$$
\begin{center}
\boxed{P(Y < 1/2~|~X < 4/3) = \frac{13}{24} = 0.5416667}
\end{center}
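Checking part (c) exactly (a Python sketch using the antiderivatives from the derivation above):

```python
# Exact check of the numerator and denominator derived in part (c).
from fractions import Fraction

def H(y):
    """Antiderivative in y of (1/4)(7/6 - y/3), from the derivation."""
    y = Fraction(y)
    return (7 * y / 6 - y ** 2 / 6) / 4

def M(x):
    """Antiderivative of the marginal f_X(x) = (3x - 1/2)/4."""
    x = Fraction(x)
    return (3 * x ** 2 / 2 - x / 2) / 4

num = H(Fraction(1, 2)) - H(0)    # the double integral
den = M(Fraction(4, 3)) - M(1)    # the marginal integral
print(num, den, num / den)  # 13/96 1/4 13/24
```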
\item They are \textbf{dependent}: the conditional probability of $Y$ given $X$ changes as the condition on $X$ changes. The results in (b) and (c) differ even though they have the same domain on $Y$ and differ only in the condition on $X$; therefore $X$ and $Y$ are dependent.
\end{enumerate}
\textbf{\underline{Question 6}}\
\begin{enumerate}
\item The general idea of linear regression is that we have input data that are not necessarily linear, and we try to fit a straight line through the points so as to minimize the mean squared error (MSE) between the points and the line. This gives us a model of the data that lets us predict future values from past observations. Linear regression is used in many applications and is a concrete building block of machine learning, since fitting a model to existing data is key to predicting future values accurately. It is also the simplest method of fitting data, as the assumed relationship between the features and the values is linear.\
Since minimizing the mean squared error is central to linear regression, we derive the formula for the least squares coefficients.\
We are trying to fit a straight line of the form $y = b_0 + b_1 x$ to our data; our unknowns are therefore $b_0$ and $b_1$, and we need values for them such that the sum of the squares of the residuals is minimized. The residual sum of squares is often called the sum of squares of the errors about the regression line and is denoted by SSE. This minimization procedure for estimating the parameters is called the method of least squares. Hence, we shall find $b_0$ and $b_1$ so as to minimize
$$SSE = \sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n}(y_i - \hat y_i)^2 = \sum_{i=1}^{n}(y_i - b_0 -b_1 x_i)^2$$
Differentiating SSE with respect to $b_0$ and $b_1$, we have
$$\frac{\partial(SSE)}{\partial b_0} = -2\sum_{i=1}^{n} (y_i - b_0 -b_1 x_i), ~~ \frac{\partial(SSE)}{\partial b_1} = -2\sum_{i=1}^{n} (y_i - b_0 -b_1 x_i) x_i $$
Setting the partial derivatives equal to zero and rearranging the terms, we obtain the \textbf{normal equations}
$$nb_0 + b_1 \sum_{i=1}^{n} x_i = \sum_{i=1}^{n} y_i, ~~ b_0 \sum_{i=1}^{n} x_i + b_1 \sum_{i=1}^{n} x_i^2 = \sum_{i=1}^{n} x_i y_i$$
which may be solved simultaneously to yield computing formulas for $b_0$ and $b_1$.\\
Given the sample $\{(x_i,y_i); i =1 ,2,...,n\}$, the least squares estimates $b_0$ and $b_1$ are computed from the formulas
$$b_1 = \frac{n \sum_{i=1}^{n}x_i y_i - (\sum_{i=1}^{n}x_i) (\sum_{i=1}^{n} y_i)}{n\sum_{i=1}^{n}x_i^2 - (\sum_{i=1}^{n}x_i)^2} = \frac{\sum_{i=1}^{n}(x_i - \bar x)(y_i - \bar y)}{\sum_{i=1}^{n} (x_i - \bar x)^2}$$ and
$$b_0 = \frac{\sum_{i=1}^{n}y_i - b_1 \sum_{i=1}^{n} x_i}{n} = \bar y - b_1 \bar x$$
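The formulas above translate directly into code (a Python sketch; the sample points used below are a made-up illustration, not the question's data):

```python
# Least-squares estimates b0, b1 from the derived formulas.
def least_squares(xs, ys):
    """Return (b0, b1) minimizing SSE for the line y = b0 + b1*x."""
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    b1 = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
          / sum((x - x_bar) ** 2 for x in xs))
    b0 = y_bar - b1 * x_bar
    return b0, b1

# Points lying exactly on y = 1 + 2x are recovered exactly.
b0, b1 = least_squares([0, 1, 2, 3], [1, 3, 5, 7])
print(b0, b1)  # 1.0 2.0
```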
\item Applying the expressions above to find the coefficients, we obtain the best-fit line shown in the plot:
\begin{figure}[H]
\centering
\includegraphics[width=400px]{prob6b}
\caption{The best fit to the data in question 6}
\end{figure}
\end{enumerate}
\textbf{\underline{Question 7}}\
\begin{enumerate}
\item We know that $P(\bar X - t_{\alpha/2} \frac{S}{\sqrt{n}} < \mu < \bar X + t_{\alpha/2} \frac{S}{\sqrt{n}}) = 1 - \alpha$, so we need to find $S$ and then $t_{\alpha/2}$.\\
Finding $\bar X$ using $\bar X = \frac{1}{n}\sum_{i=1}^{n} X_i$, where $n = 7$:
$$\bar X = \frac{1}{7} \cdot (620.38 + 558.49 + 657.09 + 636.86 + 680.27 + 666.82 + 662.04)$$
\begin{center}
\boxed{\bar X = 640.2785}
\end{center}
Finding $S$ using $S^2 = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \bar X)^2$ where $n = 7$:
\begin{center}
\boxed{S = 41.14396}
\end{center}
Since the confidence level is 99\%, $\alpha/2$ will be 0.5\%. Using the table of critical values of the t-distribution, we find $t_{0.5\%}$ by searching for the value where the cumulative probability is equal to $1-0.5\% = 0.995$ with $v = n-1 = 6$ degrees of freedom, which gives $t_{0.5\%} = 3.707$.
Substituting in the formula stated above:
$$640.2785 - 3.707 \frac{41.14396}{\sqrt{7}} < \mu < 640.2785 + 3.707 \frac{41.14396}{\sqrt{7}}$$
\begin{center}
\boxed{582.6311 < \mu < 697.9258}
\end{center}
\item Knowing that $\sigma = 35$, we can use the standard normal distribution directly with the interval $P(\bar X - Z_{\alpha/2} \frac{\sigma}{\sqrt{n}} < \mu < \bar X + Z_{\alpha/2} \frac{\sigma}{\sqrt{n}}) = 1 - \alpha$.
Since the confidence level is 99\%, $\alpha/2$ will be 0.5\%. Using the standard normal table, we find $Z_{0.5\%}$ by searching for the value where the cumulative probability is equal to $1-0.5\% = 0.995$, and we get:
\begin{center}
\boxed{Z_{0.5\%} = 2.575}\
(the table brackets 0.995 between two values, so it is reasonable to take their mean)
\end{center}
Substituting in the formula stated above:
$$640.2785 - 2.575 \frac{35}{\sqrt{7}} < \mu < 640.2785 + 2.575 \frac{35}{\sqrt{7}}$$
\begin{center}
\boxed{606.2144 < \mu < 674.3425}
\end{center}
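Both intervals can be cross-checked numerically (a Python sketch; the critical values 3.707 and 2.575 are taken from the tables as described above):

```python
# Recompute the sample mean, sample standard deviation, and both
# confidence intervals for the Question 7 data.
from math import sqrt

data = [620.38, 558.49, 657.09, 636.86, 680.27, 666.82, 662.04]
n = len(data)
x_bar = sum(data) / n
s = sqrt(sum((x - x_bar) ** 2 for x in data) / (n - 1))

t_half = 3.707 * s / sqrt(n)    # half-width of the t-interval, part (a)
z_half = 2.575 * 35 / sqrt(n)   # half-width of the z-interval, part (b)
print(round(x_bar, 4), round(s, 5))
print(round(x_bar - t_half, 4), round(x_bar + t_half, 4))
print(round(x_bar - z_half, 4), round(x_bar + z_half, 4))
```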
\end{enumerate}
\begin{thebibliography}{00}
\bibitem{reference}
Myers, Myers, Walpole, and Ye (2012), \textit{Probability and Statistics for Engineers and Scientists} ($9^{th}$ ed.).
\end{thebibliography}
\end{document}