STATISTICS

  1. Probability

Probability refers to a measure of the likelihood that a certain event will occur. It is expressed as a number between 0 and 1, in which 0 indicates that the event is impossible and 1 that it is certain, so events with probabilities close to 1 are very likely to occur. Mathematically, it is computed as the number of occurrences of the targeted event divided by the total number of trials, that is, the successes together with the failures (Haigh, 2003). Intuitively, most individuals have only a rough sense of what is likely to happen; the mathematical theory of probability deals precisely with the patterns that occur in random events.
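
As a minimal illustration (a hypothetical Python sketch, not from the source), the empirical probability of an event can be estimated as the number of successes divided by the total number of trials:

    import random

    # Estimate the probability of rolling a six with a fair die:
    # successes divided by the total number of trials.
    trials = 100_000
    successes = sum(1 for _ in range(trials) if random.randint(1, 6) == 6)
    print(successes / trials)  # close to 1/6, i.e. about 0.167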

Applications: the idea of probability is used in various fields, for instance modeling and risk assessment. Markets and the insurance industry rely on actuarial science to evaluate and set prices and to make trading decisions. Similarly, various governmental organizations use probability when making financial and environmental regulations and in entitlement analysis (Olofsson, 2005).

  2. Distribution

Distribution refers to a function or listing which shows all the possible values (or intervals) of the data and how often they occur. When the distribution of categorical data is described, the count or percentage of individuals in each group is reported. Conversely, when numerical data are organized, they are typically ordered from smallest to largest, broken into reasonably sized groups, and then put into charts and graphs so as to examine their center, shape, and amount of variability. The normal distribution is one of the best-known distributions. It describes continuous numerical data whose possible values lie on the whole real number line, and when plotted its shape is a symmetric bell curve (Laudański, 2013).
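
As a sketch of this idea (hypothetical Python code; the mean of 100 and standard deviation of 15 are chosen purely for illustration), continuous data drawn from a normal distribution can be ordered from smallest to largest and summarized by its center, shape, and spread:

    import random
    import statistics

    # Draw continuous data from a normal distribution and summarise it.
    data = [random.gauss(100, 15) for _ in range(10_000)]
    data.sort()  # reorganised from smallest to largest

    print(statistics.mean(data))    # centre: close to 100
    print(statistics.stdev(data))   # variability: close to 15
    print(statistics.median(data))  # for a symmetric bell shape, ~ the mean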

Applications: likewise, the concept of distribution underlies the mathematical principles of probability theory as well as the science of statistics. There is variability or spread in essentially every measurable quantity in any population; in the kinetic theory of gases and in quantum mechanics, for example, distributions are the central descriptive principle. For these and other reasons, simple summary figures are often used to describe a quantity, but the distribution is the more complete statistical description (Morgan & Henrion, 1992).

  3. Uncertainty

Uncertainty is a statistical parameter associated with the outcome of a measurement and is characterized by the dispersion of values that could reasonably be attributed to that measurement. It consists of several components, some of which are evaluated from the statistical distribution of the outcomes of a series of measurements and are characterized by standard deviations (Morgan & Henrion, 1992).
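
For instance (a minimal sketch with hypothetical measurement values), the sample standard deviation of a series of repeated measurements gives one such component of the uncertainty:

    import statistics

    # Hypothetical repeated measurements of the same length, in millimetres.
    measurements = [10.1, 9.9, 10.2, 10.0, 9.8, 10.1, 10.0]

    mean = statistics.mean(measurements)
    # The sample standard deviation characterises the dispersion of values
    # reasonably attributable to the measurement.
    uncertainty = statistics.stdev(measurements)
    print(f"{mean:.2f} +/- {uncertainty:.2f} mm")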

Applications: uncertainty is used in various fields, for example stock markets, gambling, water forecasting, scientific modeling, quantum mechanics, and the verification and validation of mechanical models.

  4. Sampling

Sampling refers to the process used in statistical analysis by which a predetermined number of observations is taken from a larger population. The methodology used depends on the type of analysis being undertaken, but might include simple random sampling or systematic sampling. In other words, the sample should be capable of representing the whole population, so how the sample is selected must be considered carefully (Bart et al., 2000).
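
A minimal sketch of simple random sampling (hypothetical Python code; the population values are invented for illustration):

    import random
    import statistics

    # Hypothetical population of 10,000 values.
    population = [random.gauss(50, 10) for _ in range(10_000)]

    # Simple random sampling: every member has an equal chance of selection.
    sample = random.sample(population, k=100)

    print(statistics.mean(population))  # population mean
    print(statistics.mean(sample))      # sample estimate, close to the above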

Applications: sampling assists in selecting data points from the entire set so as to estimate the characteristics of the whole population. For example, in manufacturing, different kinds of sensor data, i.e. current, pressure, vibration, controller, and voltage signals, are sampled at short time intervals.

  5. Statistical Inference

This statistical component refers to the act of drawing conclusions about a population, or about a scientific truth, from the data that have been collected. There exist several approaches to statistical inference, for example statistical modeling, explicit randomization in design and analysis, and data-oriented strategies (Panik, 2012).
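
One elementary inference, sketched here under a normal approximation with hypothetical data, is a confidence interval for a population mean:

    import statistics

    # Hypothetical sample; a ~95% confidence interval for the population
    # mean using the normal approximation (z = 1.96).
    sample = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7, 12.1, 12.0]

    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / len(sample) ** 0.5  # standard error
    print(f"95% CI: ({mean - 1.96 * se:.2f}, {mean + 1.96 * se:.2f})")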

Applications: statistical inference is utilized, for instance, in the equi-energy sampler and other temperature-domain methods, where the focus is on efficient sampling as well as quantified statistical estimation, e.g. micro-canonical averages in statistical mechanics.

  6. Regression Analysis

In statistics, regression analysis estimates the relationship between a dependent variable and one or more independent variables, typically by fitting a line through the observed data. In this context, an outlier is an observation point that is distant from the other observations; it may result from variability in the measurements, or it may indicate experimental error. Outliers are sometimes termed influential observations because they lie far from the rest of the data in a random sample and can therefore change the slope of the fitted regression line (Constantin, 2017).
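
The effect can be seen in a small sketch (hypothetical data): adding one outlier to an otherwise linear dataset changes the least-squares slope noticeably:

    # Least-squares slope, with and without an outlier (hypothetical data).
    def slope(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sxx = sum((x - mx) ** 2 for x in xs)
        return sxy / sxx

    x = [1, 2, 3, 4, 5]
    y = [2.0, 4.1, 6.0, 7.9, 10.1]     # roughly y = 2x
    print(slope(x, y))                  # close to 2.0
    print(slope(x + [6], y + [30.0]))   # one outlier pulls the slope upward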

Applications: regression analysis is used in various fields, but most prominently for estimating the relationship between a dependent variable and one or more independent variables; for instance, the relationship between rainfall and crop yields.

  7. Time Series

A time series refers to a series of data points indexed or listed in time order. In most cases it is a sequence taken at successive, evenly spaced points in time, that is, a sequence of discrete-time data. Such data often emanate from tracking or monitoring industrial or business metrics. Time series analysis comprises the methods used to analyze this data so as to extract meaningful statistics and other characteristics of the information (Tianqi et al., 2017). Likewise, time series forecasting employs models that can predict future events based on the data observed so far. An interrupted time series, by contrast, refers to the analysis of an intervention on a single time series.
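
As a simple illustration (hypothetical monthly values), a moving average is one of the basic methods for extracting structure from an evenly spaced series:

    # A simple moving average over a discrete, evenly spaced time series.
    series = [112, 118, 132, 129, 121, 135, 148, 148, 136, 119]

    def moving_average(data, window=3):
        return [sum(data[i:i + window]) / window
                for i in range(len(data) - window + 1)]

    print(moving_average(series))  # smoothed values reveal the trend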

Applications: in statistics, time series analysis is used to obtain an understanding of the structures and forces that generated the observed data.

  8. Forecasting Methods

Forecasting methods in statistics refer to ways of making predictions about future events based on present and past data. The selection of a method depends on several factors, including the relevance and availability of historical data, the context of the forecast, the degree of accuracy desired, and the time available for making the analysis (Farrow et al., 2017).
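
One simple forecasting method is exponential smoothing, sketched below with hypothetical demand figures (the smoothing factor alpha is an assumption to be tuned):

    # Simple exponential smoothing: recent observations are weighted
    # more heavily; the final level serves as the one-step-ahead forecast.
    def ses_forecast(history, alpha=0.3):
        level = history[0]
        for value in history[1:]:
            level = alpha * value + (1 - alpha) * level
        return level

    demand = [100, 104, 99, 107, 111, 108, 115]
    print(ses_forecast(demand))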

Applications: a common example is estimating some variable of interest at a future date. Although forecasting methods differ across areas of application, they all rest on formal statistics and make use of closely related statistical techniques.

  9. Optimization

Optimization refers to the selection of the best element, with respect to some criterion, from a set of alternatives. In other words, it is the process of adjusting something with the intention of finding the best values for a set of parameters without violating given constraints. The most common objectives are the minimization of cost and the maximization of throughput and efficiency. As an industrial decision-making tool, the aim is typically to maximize one or more process measures while keeping the other parameters within their constraints (Lange, 2013).
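
A minimal sketch of the idea (the cost function and its parameter values are hypothetical): selecting, from a set of alternatives, the parameter value that minimizes a cost:

    # Grid search over candidate parameter values for a hypothetical cost
    # function; the best element is the one that minimises the cost.
    def cost(rate):
        return 100 + (rate - 7.5) ** 2  # penalty on either side of 7.5

    candidates = [r / 10 for r in range(151)]  # 0.0, 0.1, ..., 15.0
    best = min(candidates, key=cost)
    print(best, cost(best))  # rate 7.5 gives the minimum cost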

Applications: optimization is mainly applied in equipment optimization, operating procedures, and control optimization. In equipment optimization, the objective is to verify that the existing equipment is being utilized to the fullest by scrutinizing operating data so as to recognize equipment bottlenecks. Similarly, operating procedures can vary over time, and optimization is used to automate the day-to-day operating decisions of the company. In control optimization, for example in chemical plants and oil refineries, which typically contain many control loops, optimization enables each loop to be tuned for controlling the operational process (Rustagi, 1994).

  10. Decision Tree Modeling

Decision tree modeling refers to a model of computation or communication in which an algorithm is regarded as a decision tree, that is, a sequence of branching operations based on comparisons of quantities, with each comparison assigned a computational cost. As a statistical method, it classifies a population into branch-like segments, constructing an inverted tree with a root node, internal nodes, and leaf nodes. The algorithm is non-parametric and can effectively handle large, complicated datasets without imposing a complicated parametric structure (Moussa et al., 2006).
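
A minimal sketch using scikit-learn (assumed to be installed; the patient features, labels, and depth limit are invented for illustration):

    from sklearn.tree import DecisionTreeClassifier

    # Features: [age, systolic blood pressure]; labels: 0 = low risk,
    # 1 = high risk (hypothetical records).
    X = [[25, 120], [40, 130], [60, 160], [35, 125], [70, 170], [50, 150]]
    y = [0, 0, 1, 0, 1, 1]

    # The tree splits the population into branch-like segments.
    tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(tree.predict([[55, 155]]))  # predicted class for a new record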

Applications: in medicine, the number of variables that are routinely evaluated has increased considerably with the advancement of electronic data storage, and decision tree modeling is used to mine this historical data, making it easier to predict the outcome of future records. In data manipulation, it aids in recoding categorical data or skewed continuous data during medical research; in such situations, decision tree models are used to decide how to collapse a variable into a manageable number of categories (Bengio et al., 2010). In other words, it is one of the powerful statistical tools for classifying, predicting, interpreting, and manipulating data, with multiple applications, particularly in medical research.

References

Bart, J., Notz, W. I., & Fligner, M. A. (2000). Sampling and statistical methods for behavioral ecologists. Cambridge: Cambridge University Press.

Bengio, Y., Delalleau, O., & Simard, C. (2010). Decision trees do not generalize to new variations. Computational Intelligence, 26(4), 449-467. doi:10.1111/j.1467-8640.2010.00366.x

Constantin, C. (2017). Using the regression model in multivariate data analysis. Bulletin of the Transilvania University of Brasov. Series V: Economic Sciences, 10(1), 27-34.

Farrow, D. C., Brooks, L. C., Hyun, S., Tibshirani, R. J., Burke, D. S., & Rosenfeld, R. (2017). A human judgment approach to epidemiological forecasting. PLoS Computational Biology, 13(3), 1-19. doi:10.1371/journal.pcbi.1005248

Haigh, J. (2003). Taking chances: Winning with probability. Oxford: Oxford University Press.

Lange, K. (2013). Optimization. New York: Springer.

Laudański, L. M. (2013). Between certainty and uncertainty: Statistics and probability in five units with notes on historical origins and illustrative numerical examples. Berlin, Heidelberg: Springer.

Morgan, M. G., & Henrion, M. (1992). Uncertainty: A guide to dealing with uncertainty in quantitative risk and policy analysis. Cambridge: Cambridge University Press.

Moussa, M., Ruwanpura, J., & Jergeas, G. (2006). Decision tree modeling using integrated multilevel stochastic networks. Journal of Construction Engineering & Management, 132(12), 1254-1266. doi:10.1061/(ASCE)0733-9364(2006)132:12(1254)

Olofsson, P. (2005). Probability, statistics, and stochastic processes. Hoboken, N.J: Wiley-Interscience.

Panik, M. J. (2012). Statistical inference: A short course. Hoboken, N.J: Wiley.

Rustagi, J. S. (1994). Optimization techniques in statistics. Boston: Academic Press.

Tianqi, C., Sarnat, S. E., Grundstein, A. J., Winquist, A., & Chang, H. H. (2017). Time-series analysis of heat waves and emergency department visits in Atlanta, 1993 to 2012. Environmental Health Perspectives, 125(5), 1-9. doi:10.1289/EHP44
