Show simple item record

dc.contributor.advisor: Tyssedal, John Sølve (nb_NO)
dc.contributor.author: Sue-Chu, Arja Margrethe (nb_NO)
dc.date.accessioned: 2014-12-19T14:00:19Z
dc.date.available: 2014-12-19T14:00:19Z
dc.date.created: 2013-11-13 (nb_NO)
dc.date.issued: 2013 (nb_NO)
dc.identifier: 664086 (nb_NO)
dc.identifier: ntnudaim:9869 (nb_NO)
dc.identifier.uri: http://hdl.handle.net/11250/259242
dc.description.abstract: In reliability theory, data are commonly missing due to censoring. This results in an incomplete data set that is often difficult to analyze. Methods are tested that estimate the missing values, creating a fictional complete data set that carries information about when the tested object is most likely to fail. Four methods were tested in this report for this purpose: the quick and dirty method, the maximum likelihood method, single imputation, and multiple imputation. The quick and dirty method sets the censored times equal to the censoring limit. The maximum likelihood estimator maximizes a likelihood that takes the censoring limits into account, whereas the imputation methods impute values for the censored, missing observations. Conditional distributions are assumed appropriate, as this is a logical choice for missing data where the failure time is not observed. Scaled truncation is used in the code for multiple imputation to aid the imputation, and both imputation methods were tested with the quick and dirty approach as well as the maximum likelihood approach as a starting point. The methods were implemented in the programming language R, using both own code and built-in functions available in R. Two numerical examples are tested for all methods, calculating and comparing the gross variance of each method. The gross variance estimates the expected total mean square error, where low values indicate accurate methods. The maximum likelihood estimator and multiple imputation normally perform best, giving the lowest gross variances in most cases. The quick and dirty method does well for some censoring limits and poorly for others, specifically for censoring limits set far from the censored failure times, and is therefore characterized as an unreliable choice. Single imputation rarely fails, but is usually less exact than the best methods; however, it is a stable method, as it consistently gives low gross variances. The accuracy of methods dealing with censored data could settle guarantee issues that legitimize products, where the importance of reliability studies is increasing. It is also relevant for the credibility of these products and their manufacturers. (nb_NO)
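The methods named in the abstract can be sketched briefly. This is an illustrative sketch only, not the thesis code (which was written in R): it assumes exponential failure times under Type I right censoring, and all variable names, the model choice, and the plug-in conditional draw for imputation are assumptions made for the example.

```python
import random

random.seed(1)
true_mean = 10.0   # true mean time to failure (assumed)
limit = 8.0        # censoring limit

# Simulate failure times; values above the limit are censored.
times = [random.expovariate(1 / true_mean) for _ in range(10_000)]
observed = [min(t, limit) for t in times]   # recorded times
failed = [t < limit for t in times]         # True = failure observed

# Quick and dirty: treat censored times as exact failures at the limit.
qd_mean = sum(observed) / len(observed)

# Maximum likelihood for an exponential model with Type I censoring:
# total time on test divided by the number of observed failures.
ml_mean = sum(observed) / sum(failed)

# Multiple imputation (ML starting point): draw each censored value from
# the conditional distribution given survival past the limit; for the
# exponential, memorylessness gives limit + Exp(ml_mean).
m = 20
mi_estimates = []
for _ in range(m):
    imputed = [t if obs else limit + random.expovariate(1 / ml_mean)
               for t, obs in zip(observed, failed)]
    mi_estimates.append(sum(imputed) / len(imputed))
mi_mean = sum(mi_estimates) / m

print(qd_mean, ml_mean, mi_mean)
```

The quick and dirty estimate is biased low here because every censored unit is recorded as failing at the limit, which mirrors the abstract's observation that the method is unreliable when the censoring limit sits far from the true failure times.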
dc.language: eng (nb_NO)
dc.publisher: Institutt for matematiske fag (nb_NO)
dc.title: Methods for Dealing with Censored Data using Experimental Design (nb_NO)
dc.type: Master thesis (nb_NO)
dc.source.pagenumber: 127 (nb_NO)
dc.contributor.department: Norges teknisk-naturvitenskapelige universitet, Fakultet for informasjonsteknologi, matematikk og elektroteknikk, Institutt for matematiske fag (nb_NO)

