Wednesday, June 5, 2019
Comparative Study of Advanced Classification Methods
CHAPTER 7: TESTING AND RESULTS

7.0 Introduction to Software Testing

Software testing is the process of executing a program or system with the intent of finding errors, termed bugs; more broadly, it involves any activity aimed at evaluating an attribute or capability of a program or system and determining that it meets its required results. Software bugs will almost always exist in any software module of moderate size, not because programmers are careless or irresponsible, but because the complexity of software is generally intractable and humans have only a limited ability to manage complexity. It is also true that for any complex system, design defects can never be completely ruled out.

7.2 Testing Process

The basic goal of the software development process is to produce software that has no errors or very few errors. In an effort to detect errors soon after they are introduced, each phase ends with a verification activity such as a review. However, most of these verification activities in the early phases of software development are based on human evaluation and cannot detect all errors. The testing process starts with a test plan. The test plan specifies all the test cases required. Then the test unit is executed with the test cases, and reports are produced and analyzed. When testing of a unit is complete, the tested units can be combined with other untested modules to form new test units. Testing of any unit involves the following:

1. Plan test cases
2. Execute test cases
3. Evaluate the results of the testing

7.3 Development of Test Cases

A test case in software engineering is a set of conditions or variables under which a tester will determine whether an application or software system is working correctly or not.
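The test-case notion just defined, together with the pass/fail comparison of expected against actual results described in the next section, can be sketched as a minimal structure. All field names and values below are illustrative, not taken from the thesis:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One entry of the test-case format of Section 7.3 (fields illustrative)."""
    case_id: str          # unique identifier used to track the case and its script
    description: str      # what functionality of the software is being tested
    category: str         # e.g. functional, negative, accessibility
    expected: object      # expected result
    actual: object = None # actual result, filled in after execution

    def verdict(self) -> str:
        # Pass/fail rule: pass if and only if expected and actual results match.
        return "pass" if self.expected == self.actual else "fail"

tc = TestCase("TC_01", "point on positive side of hyperplane", "functional",
              expected=1)
tc.actual = 1
print(tc.verdict())  # pass
```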
The mechanism for determining whether a software program or system has passed or failed such a test is known as a test oracle. Test cases follow a certain format, given as follows:

Test case id: Every test case has an identifier, uniquely associated with a certain format. This id is used to track the test case in the system upon execution. The same test case id is used in defining the test script.
Test case description: Every test case has a description, which describes what functionality of the software is to be tested.
Test category: The test category defines the business test case category, such as functional tests, negative tests and accessibility tests; these are usually associated with the test case id.
Expected result and actual result: These are implemented within the respective API. As the testing is done for a web application, the actual result will be available within the web page.
Pass/fail: The result of a test case is either pass or fail. Validation occurs based on the expected and actual results. If the expected and actual results are the same then the test case passes; otherwise the test case fails.

7.4 Testing of Application Software

The various tests done on the application software are as follows:

Integration Testing

7.4.1 Integration Testing

In this phase of software testing, individual software modules are combined and tested as a group. The purpose of integration testing is to verify the functional, performance and reliability requirements placed on major design items. These design items, i.e. assemblages (groups of units), are exercised through their interfaces using black-box testing, with success and error cases simulated via appropriate parameter and data inputs. Simulated usage of shared data areas and inter-process communication is tested, and individual subsystems are exercised through their input interfaces.
Test cases are constructed to test that all components within assemblages interact correctly, for example across procedure calls or process activations; this is done after testing the individual modules, i.e. unit testing. The overall idea is a building-block approach, in which verified assemblages are added to a verified base, which is then used to support the integration testing of further assemblages. In this approach, all or most of the developed modules are coupled together to form a complete software system or a major part of the system, which is then used for integration testing. Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit-tested modules and build a program structure that has been dictated by the design.

The top-down approach to integration testing requires that the highest-level modules be tested and integrated first. This allows high-level logic and data flow to be tested early in the process, and it tends to minimize the need for drivers. The bottom-up approach requires that the lowest-level units be tested and integrated first. These units are frequently referred to as utility modules. By using this approach, utility modules are tested early in the development process and the need for stubs is minimized. The third approach, sometimes referred to as the umbrella approach, requires testing along functional data and control-flow paths. First, the inputs for functions are integrated in the bottom-up pattern.

7.4.1.1 Test Cases for Support Vector Machine

The Support Vector Machine is tested for attributes which fall only on the positive side of the hyperplane, attributes which fall only on the negative side of the hyperplane, attributes which fall on both the positive and negative sides of the hyperplane, and attributes which fall on the hyperplane.
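The SVM test-case categories above reduce to checking the sign of the decision value for each attribute vector. The thesis reports a weight vector w and a scalar gamma for each trained SVM; assuming the usual decision rule sign(w·x - gamma) (the thesis's exact sign convention is not shown here), a minimal side-of-hyperplane check looks like this, with illustrative weights rather than the thesis values:

```python
def hyperplane_side(w, x, gamma):
    """Return +1, -1, or 0 for a point on the positive side, negative side,
    or exactly on the hyperplane w.x = gamma (sign convention assumed)."""
    f = sum(wi * xi for wi, xi in zip(w, x)) - gamma
    if f > 0:
        return 1
    if f < 0:
        return -1
    return 0

w = [0.5, -0.25]  # illustrative weights, not the thesis values
print(hyperplane_side(w, [2.0, 1.0], 0.5))  # f = 0.25  -> positive side (1)
print(hyperplane_side(w, [0.0, 1.0], 0.5))  # f = -0.75 -> negative side (-1)
print(hyperplane_side(w, [1.0, 0.0], 0.5))  # f = 0.0   -> on the hyperplane (0)
```

The three calls cover the positive-side, negative-side, and on-hyperplane test cases of Table 7.1.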
The expected results match the actual results.

Table 7.1 Test Cases for Support Vector Machine

7.4.1.2 Test Cases for Naive Bayes Classifier

The Naive Bayes Classifier is tested for attributes which belong only to class 1, attributes which belong only to class -1, and attributes which belong to both class 1 and class -1. The expected results match the actual results.

Table 7.2 Test Cases for Naive Bayes Classifier

7.5 Testing Results of Case Studies

A case study is a particular example of something used or analyzed in order to illustrate a thesis or principle. It is a documented study of a real-life situation or of an imaginary scenario.

7.5.1 Problem Statement: Haberman Dataset

The Haberman dataset contains cases from the University of Chicago's Billings Hospital on the survival of patients who had undergone surgery for breast cancer. The task is to determine whether the patient survived 5 years or longer (positive) or died within 5 years (negative).

relation haberman
attribute Age integer [30, 83]
attribute Year integer [58, 69]
attribute Positive integer [0, 52]
attribute Survival {positive, negative}
inputs Age, Year, Positive
outputs Survival

Training Set
Test Set

Weight vector and gamma:
w = [0.0991, 0.0775, 0.2813]
gamma = 0.3742

Predicted class label of test set

Confusion matrix of the SVM classifier:
True Positive (TP) = 8
False Negative (FN) = 27
False Positive (FP) = 8
True Negative (TN) = 110

AUC of classifier = 0.517792
Accuracy of classifier = 77.124183
Error rate of classifier = 22.875817
F-score = 31.372549
Precision = 50.0
Recall = 22.857143
Specificity = 93.220339

Confusion Matrix for SVM
Fig 7.1 Bar chart of SVM for various Performance Metrics

Predicted class label of Naive Bayes Classifier

Confusion matrix of the NBC classifier:
True Positive (TP) = 10
False Negative (FN) = 25
False Positive (FP) = 11
True Negative (TN) = 107

AUC of classifier = 0.5202
Accuracy of classifier = 76.4706
Error rate of classifier = 23.5294
F-score = 35.7143
Precision = 47.6191
Recall = 28.5714
Specificity = 90.678

Confusion Matrix for NBC
Fig 7.2 Bar chart of NBC for various Performance Metrics

Table 7.3 Comparison of SVM and NBC for various Performance Metrics
Fig 7.3 Bar Chart for Comparison of SVM and NBC

7.5.2 Titanic Dataset

The Titanic dataset gives the values of four attributes. The attributes are social class (first class, second class, third class, or crew member), age (adult or child), sex, and whether or not the person survived.

relation titanic
attribute Class real [-1.87, 0.965]
attribute Age real [-0.228, 4.38]
attribute Sex real [-1.92, 0.521]
attribute Survived {-1.0, 1.0}
inputs Class, Age, Sex
outputs Survived

Training Set
Test Set

w = [-0.1025, 0.0431, -0.3983]
gamma = 0.3141

Predicted class label of test set

Confusion matrix of the SVM classifier:
True Positive (TP) = 154
False Negative (FN) = 181
False Positive (FP) = 64
True Negative (TN) = 701

AUC of classifier = 0.426392
Accuracy of classifier on the test set = 77.727273
Error rate of classifier on the test set = 22.272727
F-score = 55.696203
Precision = 70.642202
Recall = 45.970149
Specificity = 91.633987

Confusion Matrix for SVM
Fig 7.4 Bar chart of SVM for various Performance Metrics

Predicted class label of Naive Bayes Classifier

Confusion matrix of the NBC classifier:
True Positive (TP) = 197
False Negative (FN) = 138
False Positive (FP) = 148
True Negative (TN) = 617

AUC of classifier = 0.4782
Accuracy of classifier = 74
Error rate of classifier = 26
F-score = 57.9412
Precision = 57.1015
Recall = 58.806
Specificity = 80.6536

Confusion Matrix for NBC
Fig 7.5 Bar chart of NBC for various Performance Metrics

Table 7.4 Comparison of SVM and NBC for various Performance Metrics
Fig 7.6 Bar Chart for Comparison of SVM and NBC

Department of CSE, RNSIT, 2014-15
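The performance metrics reported throughout Section 7.5 follow directly from the confusion matrices, so they can be recomputed as a check (AUC is omitted, since it needs the underlying classifier scores rather than just the matrix). This sketch reproduces the reported Haberman and Titanic figures from the TP/FN/FP/TN counts:

```python
def classifier_metrics(tp, fn, fp, tn):
    """Standard confusion-matrix metrics, as percentages (matching the report)."""
    total = tp + fn + fp + tn
    accuracy = 100.0 * (tp + tn) / total
    precision = 100.0 * tp / (tp + fp)
    recall = 100.0 * tp / (tp + fn)       # sensitivity
    specificity = 100.0 * tn / (tn + fp)
    f_score = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "error_rate": 100.0 - accuracy,
            "precision": precision, "recall": recall,
            "specificity": specificity, "f_score": f_score}

# Haberman SVM confusion matrix from Section 7.5.1
haberman_svm = classifier_metrics(tp=8, fn=27, fp=8, tn=110)
print(round(haberman_svm["accuracy"], 6))   # 77.124183, as reported

# Titanic: SVM vs NBC confusion matrices from Section 7.5.2
titanic_svm = classifier_metrics(tp=154, fn=181, fp=64, tn=701)
titanic_nbc = classifier_metrics(tp=197, fn=138, fp=148, tn=617)
print(round(titanic_svm["accuracy"], 4), round(titanic_nbc["accuracy"], 4))
# 77.7273 74.0, matching the reported Titanic accuracies
```

Note that the Titanic SVM has higher accuracy and precision, while the NBC has higher recall and F-score, consistent with the comparison in Table 7.4.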