This paper reviews the use of Bayesian networks (BNs) in predicting software reliability and software defects. The approach allows analysts to incorporate causal process factors and to combine qualitative and quantitative measures, hence overcoming some of the well-known limitations of traditional software metrics methods. The approach has been used and reported on by organizations such as Motorola, Siemens, and Philips. However, one of the impediments to more widespread use of BNs for this type of application was that, traditionally, BN tools and algorithms suffered from an obvious ‘Achilles’ heel’ – they were not able to handle continuous nodes properly, if at all. This forced modelers to predefine discretization intervals, which resulted in inaccurate predictions when, for example, the range of defect counts was large. Fortunately, recent advances in BN algorithms now make it possible to perform inference in BNs with continuous nodes, without the need to pre-specify discretization levels. Using such ‘dynamic discretization’ algorithms results in significantly improved accuracy for reliability and defect prediction models.
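To see why static discretization hurts accuracy when defect counts span a wide range, consider the following minimal sketch. It is an illustration only, not the dynamic discretization algorithm itself: it approximates the mean of a continuous defect-count distribution (an exponential stand-in, an assumption for this example) by assigning each predefined interval's probability mass to its midpoint, the way a statically discretized BN node would, and compares coarse versus fine intervals.

```python
import math

def discretized_mean(n_intervals, lo=0.0, hi=1000.0, rate=0.01):
    """Approximate E[X] for X ~ Exp(rate), truncated to [lo, hi],
    by assigning each interval's probability mass to its midpoint,
    as a statically discretized BN node would."""
    width = (hi - lo) / n_intervals
    mean = 0.0
    for i in range(n_intervals):
        a = lo + i * width
        b = a + width
        # probability mass in [a, b) under the exponential distribution
        mass = math.exp(-rate * a) - math.exp(-rate * b)
        mean += mass * (a + b) / 2.0  # midpoint represents the interval
    return mean

true_mean = 1.0 / 0.01  # true mean of Exp(0.01) is 100
coarse = discretized_mean(5)    # a few wide, pre-specified intervals
fine = discretized_mean(500)    # many narrow intervals

# The coarse discretization overestimates the mean by tens of defects,
# while the fine one is close to 100. Dynamic discretization achieves
# the accuracy of the fine case by refining intervals only where the
# probability mass actually concentrates, without pre-specification.
```

The gap between the coarse and fine estimates is exactly the inaccuracy the abstract attributes to pre-specified discretization intervals.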
Secure data transactions have become a necessity of our time: medical records, financial records, legal information, and payment gateways all require a secure transaction process. Several methods have been proposed to perform secure, fast, and scalable data transactions in web services. Because web servers deal with enormous numbers of queries, it becomes difficult to process every query within the available time; failure to do so can crash the web service or cause transaction failures, leading to significant financial losses for the organizations involved. Batched stream processing is a new distributed data-processing paradigm that models recurring batch computations over incrementally bulk-appended data streams. The model is inspired by our empirical study of a trace from large-scale production data-processing clusters at the web-server end; it allows a set of effective query optimizations that are not possible in a traditional batch-processing model. By applying Bayesian network concepts, we stream the queries so that similar queries are batched into clusters, which we call jumbo queries. These batched queries are then committed as transactions, so the complete process runs without excessive load on the servers, and heavy transaction volumes can be handled without failures.
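The batching step can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the grouping key (operation and target table) is a crude stand-in for the Bayesian-network-based similarity clustering, and all names are assumptions introduced for the example. Similar queries are collected into one "jumbo query" per cluster, and each cluster is committed as a single batched transaction instead of many small ones.

```python
from collections import defaultdict

def batch_queries(queries):
    """Group incoming queries into jumbo queries, keyed here by
    (operation, table) as a stand-in for similarity clustering."""
    jumbo = defaultdict(list)
    for q in queries:
        key = (q["op"], q["table"])
        jumbo[key].append(q)
    return dict(jumbo)

def commit_jumbo(jumbo):
    """Commit each cluster as one transaction; returns the commits made.
    In a real system each commit would be a single multi-row statement."""
    commits = []
    for (op, table), batch in jumbo.items():
        commits.append({"op": op, "table": table, "size": len(batch)})
    return commits

queries = [
    {"op": "INSERT", "table": "payments", "row": 1},
    {"op": "INSERT", "table": "payments", "row": 2},
    {"op": "UPDATE", "table": "accounts", "row": 7},
    {"op": "INSERT", "table": "payments", "row": 3},
]
jumbo = batch_queries(queries)
commits = commit_jumbo(jumbo)
# Four individual queries collapse into two jumbo-query commits,
# reducing the per-transaction load on the server.
```

The intended benefit is that the server's commit overhead scales with the number of clusters rather than the number of raw queries.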