.) AutoIt (Excel):

#include <Excel.au3>
#include <MsgBoxConstants.au3>
#include <GUIConstantsEx.au3>

GUICreate("Custom MsgBox", 210, 80)
GUICtrlCreateLabel("This script will open Microsoft Excel!", 10, 10)
$OKID = GUICtrlCreateButton("Ok", 10, 50, 50, 20)
$ExitID = GUICtrlCreateButton("Exit", 150, 50, 50, 20)
GUISetState() ; display the GUI

Do
    $msg = GUIGetMsg()
    Select
        Case $msg = $OKID
            ; Open Excel, create a workbook, and write to it.
            Local $oExcel = _Excel_Open()
            Local $oWorkbook = _Excel_BookNew($oExcel)
            _Excel_RangeWrite($oWorkbook, Default, "Test", "A1")
            _Excel_BookSaveAs($oWorkbook, @TempDir & "\_Excel12.xls", Default, True)
            _Excel_RangeWrite($oWorkbook, Default, "2nd Test", "B1")
            _Excel_BookSave($oWorkbook)
            MsgBox($MB_SYSTEMMODAL, "Excel UDF: _Excel_BookSave Example 1", _
                    "Workbook has been successfully saved as '" & @TempDir & "\_Excel12.xls'.")
            _Excel_Close($oExcel)
            ExitLoop ; done after the Excel demo has run
        Case $msg = $ExitID Or $msg = $GUI_EVENT_CLOSE
            ; MsgBox(0, "You Clicked on", "Close")
    EndSelect
Until $msg = $GUI_EVENT_CLOSE Or $msg = $ExitID

.) Selenium WebDriver:
(Steps)

o. Create new project -> Dynamic Web Project -> "name" -> Finish
Now create two web pages:
o. WebContent -> New -> HTML Page
And a Java file:
o. Java Resources -> New -> Package -> Class

Providing the JAR files:
o. Project name -> right-click -> Build Path -> Configure Build Path -> Add External JARs -> Apply and Close
o. Right-click on the program -> Run As -> Java Application

(Code)

>App.java

package seleniumProject.SeleniumPoject;

import org.openqa.selenium.firefox.FirefoxDriver;

public class App {
    public static void main(String[] args) {
        // Point Selenium at the geckodriver executable before starting Firefox.
        System.setProperty("webdriver.gecko.driver", "C:\\Users\\Admin3\\Desktop\\geckodriver.exe");

        FirefoxDriver driver = new FirefoxDriver();

        // Open the first page of the deployed web project.
        driver.get("http://localhost:8080/elenium/index.html");

        String a = driver.getTitle();
        System.out.println(a);

        // Fill in the login form and submit it.
        driver.findElementByName("username").sendKeys("Hello");
        driver.findElementByName("password").sendKeys("Hi");
        driver.findElementByName("submit").click();

        String b = driver.getTitle();
        System.out.println(b);

        // Compare the titles of the two pages.
        if (a.equalsIgnoreCase(b)) {
            System.out.println("Title Match");
        } else {
            System.out.println("Title not matched");
        }

        driver.quit();
    }
}

>Index.html

<!DOCTYPE html>
<html>
<head>
<meta charset="ISO-8859-1">
<title>First page</title>
</head>
<body>
<form action="secondpage.html">
UserName:<input type="text" name="username">
Password:<input type="password" name="password">
<input type="submit" name="submit">
</form>
</body>
</html>

>Secondpage.html

<!DOCTYPE html>
<html>
<head>
<meta charset="ISO-8859-1">
<title>Second page</title>
</head>
<body>
</body>
</html>

>Practical No. 1

AIM : EVALUATING TEST EXIT CRITERIA AND REPORTING

PROBLEM STATEMENT :

1. What properties of the test execution process can be measured to assess the completeness of testing and to support reporting?
2. What type of worksheet can be used to track the number of important properties of test execution?
3. Write a test plan in IEEE 829 format.

SOLUTION :

Answer 1 :

To measure completeness of testing with respect to the exit criteria, and to generate the information needed for reporting, we can measure properties of the test execution process such as the following:

• Number of test conditions, cases, or test procedures planned, executed, passed, and failed
• Total defects, classified by severity, priority, status, or some other factor
• Change requests proposed, accepted, and tested
• Planned versus actual costs, schedule, effort
• Quality risks, both mitigated and residual
• Lost test time due to blocking events
• Confirmation and regression test results

Answer 2 :
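The worksheet in question is a test case tracking worksheet: typically a simple spreadsheet with one row per test case, recording its state (e.g., planned, in progress, passed, failed, blocked, skipped) along with fields such as owner and dates, so that the counts listed in Answer 1 can be read straight off it. The sketch below is a minimal, hypothetical Java rendering of such a worksheet's summary logic; the row fields and state names are assumptions for illustration, not taken from the original document.

import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class TestTrackingWorksheet {
    // One row of the worksheet: a test case ID plus its current state (assumed fields).
    record Row(String id, String state) { }

    public static void main(String[] args) {
        List<Row> worksheet = List.of(
                new Row("TC-01", "passed"),
                new Row("TC-02", "failed"),
                new Row("TC-03", "blocked"),
                new Row("TC-04", "planned"));

        // Summarize the worksheet: count test cases per state.
        Map<String, Long> summary = new TreeMap<>();
        for (Row row : worksheet) {
            summary.merge(row.state(), 1L, Long::sum);
        }
        long executed = summary.getOrDefault("passed", 0L) + summary.getOrDefault("failed", 0L);

        System.out.println("Planned:  " + worksheet.size());
        System.out.println("Executed: " + executed);
        summary.forEach((state, n) -> System.out.println(state + ": " + n));
    }
}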

>Practical No. 02

AIM : STATIC AND DYNAMIC ANALYSIS

EQUIVALENCE PARTITIONING EXERCISE

A screen prototype for one screen of the HELLOCARMS system is shown in figure 4-7. This screen asks for three pieces of information:
• The product being applied for, which is one of the following:
  – Home equity loan
  – Home equity line of credit
  – Reverse mortgage
• Whether someone has an existing Globobank checking account, which is either Yes or No
• Whether someone has an existing Globobank savings account, which is either Yes or No

If the user indicates an existing Globobank account, then the user must enter the corresponding account number. This number is validated against the bank's central database upon entry. If the user indicates no such account, the user must leave the corresponding account number field blank. If the fields are valid, including the account number fields, then the screen will be accepted. If one or more fields are invalid, an error message is displayed.

The exercise consists of two parts:

1. Show the equivalence partitions for each of the three pieces of information, indicating valid and invalid members.
2. Create test cases to cover these partitions, keeping in mind the rules about combinations of valid and invalid members.

EQUIVALENCE PARTITIONING EXERCISE DEBRIEF

First, let's take a look at the equivalence partitions. For the application-product field, the equivalence partitions are as follows:

For each of the two existing-account entries, the situation is best modeled as a single input field that consists of two subfields. The first subfield is the Yes/No field. This subfield determines the rule for checking the second subfield, which is the account number. If the first subfield is Yes, the second subfield must be a valid account number. If the first subfield is No, the second subfield must be blank. So the existing checking account information partitions, and likewise the existing savings account information partitions, are as follows:

Now, let's create tests from these equivalence partitions. As we do so, we're going to capture traceability information from the test case number back to the partitions. Once we have a trace from each partition to a test case, we're done—provided that we're careful to follow the rules about combining valid and invalid partitions!

BOUNDARY VALUE ANALYSIS :

A black-box test design technique in which test cases are designed based on boundary values.

The above screen asks for two pieces of information:
– Loan amount
– Property value

For both fields, the system allows entry of whole dollar amounts only (no cents), and it rounds to the nearest $100. Assume the following rules apply to loans:
– The minimum loan amount is $5,000.
– The maximum loan amount is $1,000,000.
– The minimum property value is $25,000.
– The maximum property value is $5,000,000.

If the fields are valid, then the screen will be accepted. If one or both fields are invalid, an error message is displayed. The exercise consists of two parts:

1. Show the equivalence partitions and boundary values for each of the two fields, indicating valid and invalid members and the boundaries for those partitions.
2. Create test cases to cover these partitions and boundary values, keeping in mind the rules about combinations of valid and invalid members.

BOUNDARY VALUE EXERCISE DEBRIEF

First, let's take a look at the equivalence partitions and boundary values, which are shown in figure 4-15.

So, for the loan amount we can show the boundary values and equivalence partitions as shown in table 4-8.

Make sure you understand why these values are boundaries, based on the round-off rules given in the requirements. For the transfer decision, we can show the equivalence partitions for the loan amount as shown in table 4-10.
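
As a concrete illustration of those round-off boundaries: because an entry is rounded to the nearest $100 before the range check, a value such as $4,950 rounds up to the $5,000 minimum and is accepted, while $4,949 is rejected. The Java sketch below assumes round-half-up behavior; the helper names are invented for illustration, and the exact boundary values of figure 4-15 are not reproduced in this copy.

public class LoanAmountBoundaries {
    // Round an entered whole-dollar amount to the nearest $100 (round half up assumed).
    static long roundTo100(long dollars) {
        return Math.round(dollars / 100.0) * 100;
    }

    // Valid loan amounts after rounding: $5,000 .. $1,000,000 inclusive.
    static boolean loanAmountValid(long dollars) {
        long rounded = roundTo100(dollars);
        return rounded >= 5_000 && rounded <= 1_000_000;
    }

    public static void main(String[] args) {
        // Candidate boundary tests around the minimum and the maximum.
        long[] candidates = {4_949, 4_950, 5_000, 1_000_000, 1_000_049, 1_000_050};
        for (long amount : candidates) {
            System.out.printf("%,9d -> rounds to %,9d -> %s%n",
                    amount, roundTo100(amount), loanAmountValid(amount) ? "accept" : "reject");
        }
    }
}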

Now, let's create tests from these equivalence partitions and boundary values. We'll capture traceability information from the test case number back to the partitions or boundary values, and as before, once we have a trace from each partition to a test case, we're done—as long as we didn't combine invalid values!

DECISION TABLE :

A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.

DECISION TABLE TESTING :

A black-box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) as shown in a decision table.

During development, the HELLOCARMS project team added a feature to HELLOCARMS. This feature allows the system to sell a life insurance policy to cover the amount of a home equity loan so that, should the borrower die, the policy will pay off the loan. The premium is calculated annually, at the beginning of each annual policy period, based on the loan balance at that time. The base annual premium will be $1 for $10,000 in loan balance. The insurance policy is not available for lines of credit or for reverse mortgages. The system will increase the base premium by a certain percentage based on some basic physical and health questions that the Telephone Banker will ask during the interview.

A Yes answer to any of the following questions will trigger a 50% increase to the base premium:
1. Have you smoked cigarettes in the past 12 months?
2. Have you ever been diagnosed with cancer, diabetes, high cholesterol, high blood pressure, a heart disorder, or stroke?
3. Within the last 5 years, have you been hospitalized for more than 72 hours except for childbirth or broken bones?
4. Within the last 5 years, have you been completely disabled from work for a week or longer due to a single illness or injury?

The Telephone Banker will also ask about age, weight, and height. The weight and height are combined to calculate the body mass index (BMI). Based on that information, the Telephone Banker will apply the rules in table 4-18 to decide whether to increase the rate or even decline to issue the policy based on possible weight-related illnesses in the person's future.

The increases are cumulative. For example, if the person has normal weight, smokes cigarettes, and has high blood pressure, the annual rate is increased from $1 per $10,000 to $2.25 per $10,000. If the person is a 45-year-old male diabetic with a body mass index of 39, the annual rate is increased from $1 per $10,000 to $2.625 per $10,000.
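
Because the increases are cumulative (i.e., multiplicative), the two worked examples can be checked in a few lines. The Java sketch below assumes, per those examples, that each Yes answer multiplies the rate by 1.5 and that table 4-18 (not reproduced in this copy) yields a 75% increase for a BMI of 39 at age 45.

public class InsurancePremium {
    // Base annual premium: $1 per $10,000 of loan balance.
    static final double BASE_RATE = 1.00;

    // Each Yes answer to a health question raises the premium by 50%,
    // and the BMI/age table contributes one further percentage increase.
    static double rate(int yesAnswers, double bmiAgeIncreasePct) {
        double rate = BASE_RATE;
        for (int i = 0; i < yesAnswers; i++) {
            rate *= 1.50;                       // cumulative 50% increases
        }
        return rate * (1 + bmiAgeIncreasePct);  // weight-related increase
    }

    public static void main(String[] args) {
        // Normal weight, smoker, high blood pressure: two Yes answers.
        System.out.println(rate(2, 0.00));   // 2.25  ($2.25 per $10,000)
        // 45-year-old diabetic with BMI 39: one Yes answer plus an assumed 75% table increase.
        System.out.println(rate(1, 0.75));   // 2.625 ($2.625 per $10,000)
    }
}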

The exercise consists of three steps:
1. Create a decision table that shows the effect of the four health questions and the body mass index.
2. Show the boundary values for body mass index and age.
3. Create test cases to cover the decision table and the boundary values, keeping in mind the rules about testing nonexclusive rules.

DECISION TABLE EXERCISE DEBRIEF

First, we created the decision table from the four health questions and the BMI/age table. The answer is shown in table 4-19. Note that the increases are shown in percentages.

It's important to notice that rules 1 through 4 are nonexclusive, though rules 5 through 12 are exclusive. In addition, there is an implicit rule that the age must be greater than 17 or the applicant will be denied not only insurance but the loan itself. We could have put that here in the decision table, but our focus is primarily on testing business functionality, not input validation. We'll cover those tests with boundary values. Now, let's look at the boundary values for body mass index and age, shown in figure 4-18.

For the BMI, we can show the boundary values and equivalence partitions as shown in table 4-20.

For the age, we can show the boundary values and equivalence partitions as shown in table 4-21.

Finally, table 4-22 shows the test cases. They are much like the decision table, but note that we have shown the rate (in dollars per $10,000 of loan balance) rather than the percentage increase.

Notice our approach to testing the nonexclusive rules. First, we tested every rule, exclusive and nonexclusive, in isolation. Then, we tested the remaining untested boundary values. Next, we tested combinations of only one nonexclusive rule with one exclusive rule, making sure each nonexclusive rule had been tested once in combination (but not all the exclusive rules were tested in combination).

Finally, we tested a combination of all four nonexclusive rules with one exclusive rule. We did not use combinations with the "decline" rules since presumably there's no way to check if the increase was correctly calculated. For the decision table and the boundary values, we've captured test coverage in the following tables to make sure we missed nothing.

Table 4-23, table 4-24, and table 4-25 show decision table coverage using three coverage metrics.
STATE TESTING EXERCISE

This exercise consists of three parts:
1. Using the following semiformal use case, translate it into a state transition diagram, shown from the point of view of the Telephone Banker.
2. Generate test cases to cover the states and transitions (0-switch).
3. Generate a switch table to the 1-switch level.

STATE TESTING EXERCISE DEBRIEF

1. Create the state transition diagram.
Figure 4-27 shows the state transition diagram we generated based on the preceding semiformal use case.

2. Generate test cases to cover the states and transitions (0-switch).
Let's adopt a rule that says that any test must start in the initial waiting state and may only end in the waiting state or the shift over state. To achieve state and transition coverage, the following tests will suffice:

1. (waiting, phone call, loan screens, exceed[value], escalate, approved, resume, system loan offer, offer screen, offer insurance, insurance screens, cust ins accept, add to package, cust loan accept, send to LoDoPS, waiting)
2. (waiting, phone call, loan screens, exceed[loan], escalate, approved, resume, system loan offer, offer screen, offer insurance, insurance screens, cust ins reject, archive, cust loan accept, send to LoDoPS, waiting)
3. (waiting, phone call, loan screens, system loan offer, offer screen, offer insurance, insurance screens, system ins reject, archive, cust loan reject, archive, waiting)
4. (waiting, phone call, loan screens, exceed[loan], escalate, system loan decline, archive, waiting)
5. (waiting, phone call, loan screens, system loan decline, archive, waiting)
6. (waiting, phone call, loan screens, cust cancel, archive, waiting)
7. (waiting, phone call, loan screens, exceed[loan], escalate, cust cancel, archive, waiting)
8. (waiting, phone call, loan screens, system loan offer, offer screen, cust cancel, archive, waiting)
9. (waiting, phone call, loan screens, system loan offer, offer screen, offer insurance, insurance screens, cust cancel, archive, waiting)
10. (waiting, end of shift, log out, shift over)

Notice that we didn't do explicit boundary value or equivalence partitioning testing of, say, the loan amount or the property value, though we certainly could have.

Also, note that this is an example of when our rule of thumb for the number of needed test cases did not work. This usually happens because there are multiple paths between two non-final states (in this case, between offering and gathering insurance info) that must be tested with separate test cases.

3. Generate a switch table to the 1-switch level.
First, redraw the state transition diagram of figure 4-27 to make it easier to work with. It should look like figure 4-28.

From the diagram, we can generate the 1-switch table shown in table 4-27. Notice that we have used patterns in the diagram to generate the table. For example, the maximum number of outbound transitions for any state in the diagram is four, so we use four columns in both the 0-switch and 1-switch sections. We started with six 1-switch rows per 0-switch row because there are six states, though we were able to delete most of those rows as we went along. This leads to a sparse table, but who cares as long as it makes generating this beast easier.
>Practical No. 03

AIM : RATE QUALITY ATTRIBUTES FOR DOMAIN AND TECHNICAL TESTING

Technical test analysts, according to the ISTQB Advanced syllabus, should be able to identify opportunities to use test techniques that are appropriate for testing:
– Accuracy
– Suitability
– Interoperability
– Usability
– Security

USABILITY TEST EXERCISE

Review the HELLOCARMS system requirements document, specifically the usability section. Analyze the risks and create an informal test design for usability testing. The following section contains our solution. Of course, your solution may differ based on your experience with usability testing.

USABILITY TEST EXERCISE DEBRIEF

There are several interesting requirements for HELLOCARMS under usability. We selected 030-020-020 under the learnability attribute and 030-010-020 under understandability. Non-functional testing often forces us to be more creative in coming up with test designs than functional testing does. It is not always simply a matter of coming up with input data, expected output data and behaviors, etc. In line with recommendations from ISO 9126, much of our non-functional testing will be static testing or trying to measure metrics after a project has occurred.

STARTING WITH 030-020-020 :

HELLOCARMS will include a self-contained training wizard for all users. This wizard will lead a new user through all of the screens using canned data. The training will be sufficient that an average user will become proficient in the use of HELLOCARMS within 8 hours of training.

Testing for this requirement would be straightforward static testing at first. It would consist of working through the wizard, one screen at a time, and comparing the information presented to the users against the requirements and designs actually used. We would check for completeness, correctness, and order of presentation. Once the system was delivered into beta testing, we would send out questionnaires to Telephone Bankers and our partners asking for information on their first week of using the system. Specifically, we would target any errors, misunderstandings, or inefficiencies they run into, asking for feedback on upgrades or changes they might like to see in the wizard.

Before each modification project for HELLOCARMS, we would scrutinize all reported defect records looking for evidence of mistakes made through ignorance of the system and make sure our documentation covers those areas correctly.

While going through the wizard, we would keep in mind the understandability requirement, 030-010-020 :

All screens, instructions, help, and error messages shall be understandable at an eighth grade level.

We would make sure that little or no difficult domain-specific jargon was thrown in to confuse a Telephone Banker or partner.

Looking at the understandability requirement, there are several different ways to try to determine the grade level required to understand a document. One of the most common is the Flesch-Kincaid grade level readability formula (which happens to be built into MS Word). According to the Word help file, the formula is as follows:

FKRA = (0.39 x ASL) + (11.8 x ASW) - 15.59

In this formula, FKRA = Flesch-Kincaid reading age, ASL = average sentence length, and ASW = average number of syllables per word.
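
The formula is simple enough to script. The sketch below is a rough, hypothetical Java reading-level checker: the syllable counter is a crude vowel-group heuristic (tools such as Word use more refined rules), so its output should be treated as approximate.

public class FleschKincaid {
    // Crude syllable estimate: count groups of consecutive vowels in the word.
    static int syllables(String word) {
        int count = 0;
        boolean inVowelGroup = false;
        for (char c : word.toLowerCase().toCharArray()) {
            boolean vowel = "aeiouy".indexOf(c) >= 0;
            if (vowel && !inVowelGroup) count++;
            inVowelGroup = vowel;
        }
        return Math.max(count, 1);
    }

    // FKRA = (0.39 x ASL) + (11.8 x ASW) - 15.59
    static double gradeLevel(String text) {
        String[] sentences = text.split("[.!?]+");
        String[] words = text.split("\\s+");
        int totalSyllables = 0;
        for (String w : words) totalSyllables += syllables(w);
        double asl = (double) words.length / sentences.length;   // average sentence length
        double asw = (double) totalSyllables / words.length;     // average syllables per word
        return 0.39 * asl + 11.8 * asw - 15.59;
    }

    public static void main(String[] args) {
        String message = "Enter the account number. Leave the field blank if you have no account.";
        System.out.printf("Estimated grade level: %.1f%n", gradeLevel(message));
    }
}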

MAINTAINABILITY AND PORTABILITY EXERCISE

Using the HELLOCARMS system requirements document, analyze the risks and create an informal test design for each of the following, using one requirement for each:
• Maintainability
• Portability

MAINTAINABILITY AND PORTABILITY EXERCISE DEBRIEF

Maintainability is an interesting quality characteristic for testers to deal with. Most maintainability issues are not amenable to our normal concept of a dynamic test, with input data, expected output data, etc. Certainly some maintainability testing is done that way, when dealing with patches, updates, and so forth.

For this exercise, we are going to select requirement 050-010-010: Standards and guidelines will be developed and used for all code and other generated materials used in this project to enhance maintainability.

Our first effort, therefore, done as early as possible, would be to review the programming standards and guidelines with the rest of the test team and the development group. Assuming, of course, that we have standards and guidelines. If there were none defined, we would try to get a cross-functional team busy defining them.

The majority of our effort would be during static testing. Starting (specifically for this requirement) at the low-level design phase, we would want to attend reviews, walk-throughs, and inspections. We would use available checklists, including Marick's, Laszlo's, and our own internal checklists based on defects found previously.

Throughout each review, we would be asking the same questions: Are we actually adhering to the standards and guidelines we have? Are we building a system that we will be able to troubleshoot when failures occur? Are we building a system with low coupling and high cohesion? Is it modular? How much effort will it take to test?

Since these standards and guidelines are not optional, we would work with the developers to make sure they understood them, and then we would start processing exceptions to them through the defect tracking database as we would any other issues.

Beyond the standards and guidelines, there would still be some dynamic testing of changes made to the system, specifically for regression after patches and other modifications. We would want to mine the defect tracking database and the support database to try to learn where regression bugs have occurred. New testing would be predicated on those findings, especially if we found hot spots where failures occurred with regularity.

Many of our metrics would have to come from analyzing other metrics. How hard was it to figure out what was wrong (analyzability)? When changes are needed, how much effort and time does it take to make them (changeability)? How many regression bugs are found (in test and in the field) after changes are made (stability)? And how much effort has it taken for testers to be able to test the system (testability)?

Portability testing covers the adaptability, installability, coexistence, and replaceability subattributes. Because HELLOCARMS is surfaced on browsers, we find the compelling attribute to be adaptability. Therefore, we have selected requirement 060-010-030 for discussion.
>Practical No. 04

AIM : PERFORM REVIEW

Types of Reviews :

1. At the lowest level of formality (and, usually, of defect removal effectiveness), we find the informal review. This can be as simple as two people, the author and a colleague, discussing a design document over the phone.
2. Technical reviews are more formalized, but still not highly formal.
3. Walk-throughs are reviews where the author is the moderator and the item under review itself is the agenda. That is, the author leads the review, and in the review, the reviewers go section by section through the item under review.
4. Inspections are the most formalized reviews. The roles are well defined. Managers may not attend. The author may be neither moderator nor secretary. A reader is typically involved.

What are Informal Reviews?

Informal reviews are applied many times during the early stages of the life cycle of a document. A two-person team can conduct an informal review. In later stages these reviews often involve more people and a meeting. The goal is to help the author and to improve the quality of the document. The most important thing to keep in mind about informal reviews is that they are not documented.

What are Formal Reviews?

Formal reviews follow a formal process. They are well structured and regulated. A formal review process consists of six main steps:
1. Planning
2. Kick-off
3. Preparation
4. Review meeting
5. Rework
6. Follow-up
CODE REVIEW EXERCISE

In this exercise, you apply Marick's and the OpenLaszlo code review checklists to the code shown after the instructions.
1. Prepare: Review the code using Marick's questions, Laszlo's checklist, and any other C knowledge you have. Consider maintainability issues as you review the code. Document the issues you find.
2. Hold a review meeting: If you are using this book to support a class, work in a small group to perform a walk-through, creating a single list of problems.
3. Discuss: After the walk-through, discuss your findings with other groups and the instructor.

The solution to the first part is shown in the next section.

Here is the code that you are reviewing. This code performs a task by getting values from the user, performing a calculation, and then printing out the result. On subsequent pages, we will present a debrief for this exercise.

1. getInputs(float *, float *, float *);
2. float doCalcs(float, float, float);
3. ShowIt(float);
4. main(){
5. float base, *power;
6. float Scaler;
7. getInputs(&base, power, &Scaler);
8. ShowIt(doCalculations(base, *power));
9. }
10. void getInputs(float *base, float power, float *S){
11. float base, power;
12. float i;
13. printf("\nInput the radix for calculation => ");
14. scanf("%f", *base);
15. printf("\nInput power => ");
16. scanf("%f", *power);
17. printf ("/nScale value => ")
18. scanf("i", i);
19. *Base = &base;
20. *P = &power; }
21. float doCalcs(float base, float power, float Scale){
22. float total;
23. if (Scale != 1) total == pow(base, power) * Scale;
24. else total == pow(base, power);
25. return;}
26. void ShowIt(float Val){
27. printf("The scaled power value is %f W.\n", Val);
28. }

CODE REVIEW EXERCISE DEBRIEF

This code is representative of code that Jamie frequently worked with when he was doing maintenance programming. Rex will let Jamie describe his findings here.

Let's start with some general maintainability issues with this code:
1. No comments.
2. No function headers. I have a standard that says that every callable function gets a formal header explaining what it does, the arguments it takes, the return value, and what the value means. I also include change flags and dates, with an explanation for each change.
3. No reasonable naming conventions are followed. I would prefer Hungarian notation so we can discern the data type automatically.
4. No particular spacing standards are used, so the code is not as readable as it might be.

Based on Marick's checklist and a general knowledge of C programming weaknesses and features, here are some specific issues with this code:

■ Line 0: Not shown: We need the includes for the library functions we are calling. We would need stdio.h (for printf() and scanf()) and math.h (for pow()). These problems would actually prevent the program from compiling, which should be a requirement before having a code review.
■ Line 1: Every function should have a return value, in this case void.
■ Line 2: No issues.
■ Line 3: Once again, the function should have a return value.
■ Line 4: This might work in some compilers, but main should return a value (int), and if it takes no explicit arguments, it should have void. This is a violation of Marick's miscellaneous question, Does the program have a specific exit value?
■ Line 5: The variable power is defined as a pointer to float, but no storage is allocated for it. Near as I can tell, there is no reason to declare it as a pointer; it should simply be declared as a local float. Note that these variables are passed in to a function call before being initialized. This could be seen as a violation of Marick's declaration question, Are all variables always initialized? Since no data has been allocated, this is a violation of Marick's allocation question, Is too little (or too much) data being allocated? And, just to make it interesting, assuming that the code was run this way, it would be possible to try to dereference the pointer *power, which breaks Marick's pointer question, Can a pointer ever be dereferenced when NULL?
■ Line 6: Variable is passed in to a function call before being initialized. This is a violation of Marick's declaration question, Are all variables always initialized?
■ Line 7: The function call arguments are technically correct since the variable power was defined as a pointer. However, the way it is written, it will blow up since there is no storage allocated. This is a violation of Marick's allocation question, Is too little (or too much) data being allocated? Since I would change power to a float in line 5, this argument would have to be passed in as &power just like the other arguments.
■ Line 8: Same issue with power; it should be passed by value as just power. Also, the function doCalculations() does not exist. It should be doCalcs(). And, if they are meant to be the same function, the argument count is incorrect.
■ Line 9: No issue.
■ Line 10: S is not a good name for a variable.
■ Line 11: The local variables have exactly the same names as the formal parameters passed in. I would like to think that this naming would prevent the module from compiling; I fear it won't. It certainly will be confusing. If we must name the local variables the same as the parameters (considering the way they are used, it makes a little sense), then we should change the capitalization to make them explicitly different. I would capitalize the local variables Base and Power.
■ Line 12: While this is legal, it is a bad naming technique. The variable i, when used, almost always stands for an integer; here it is a float. At the very least it is confusing. This should likely match the others and be renamed Scaler.
■ Line 13: No issue, although the prompt message is weak.
■ Line 14: No issue.
■ Line 15: No issue.
■ Line 16: No issue.
■ Line 17: The line feed is backwards: it should be \n and not /n.
■ Line 18: We should be loading the value of S with this scanf() function. There is no need for the local variable i.
■ Line 19: Let me say that I hate pointer notation with a passion. Here, we are assigning a pointer to the value pointed to by *Base. What we really want to do is assign the actual value; the statement should read *base = Base (assuming we made the change in line 11 to its name).
■ Line 20: Same as line 19, and I still really hate pointer notation. Also, we are not returning any value to the third argument of the getInputs() function. There should be a statement that goes *Scaler = scaler (assuming we change the name of the variable as suggested in line 12). *P is never declared; it is also a really poor name for a variable. Finally, the closing curly brace should not be on this line but moved down to the following line. That is the same indentation convention that we use for the other curly braces.
■ Line 21: No issue.
■ Line 22: No issue.
■ Line 23: We are doing an explicit equality check on a float [if (Scale != 1)]. This is a violation of Marick's computation question, Are exact equality tests used on floating point numbers? On some architectures, I would worry about whether the float representation of 1 is actually going to be equal to one. The problem is that I really don't know what this scalar is supposed to do. It looks like, the way the code is written, Scale is only there to save a multiplication if it is equal to one. I would want to know if the user can scale by 5.3 (or any other real number) or only by integers. If they could input only integers, I would change the data type to int everywhere it is used. If there is a valid reason to input a real number (i.e., one with a decimal), then I would lose the if statement and simply do the multiplication each time. Comments would help me understand the logic being used. Also, the wrong operator has been used; it should be an assignment (=) rather than a Boolean compare (==).
■ Line 24: Incorrect operator; need a single equal sign.
■ Line 25: The calculation is being lost because we are returning nothing. The function should return the local variable total.
■ Line 26: No issue.
■ Line 27: No issue.
>Practical No. 05

AIM : INCIDENT MANAGEMENT

Defect (or bug or fault or problem): A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g., an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

Incident (or deviation): Any event occurring that requires investigation.

INCIDENT MANAGEMENT EXERCISE

Assume that a select group of Telephone Bankers will participate in HELLOCARMS testing as a beta test. The bankers will enter live applications from customers, but they will also capture the information and enter it into the current system afterward to ensure that no HELLOCARMS defects affect the customers. The bankers are not trained testers and are unwilling to spend time learning testing fundamentals. So, to avoid having the bankers enter poorly written incident reports and introduce noise into the incident report metrics, management has decided that, when a banker finds a problem, he or she will send an e-mail to a test analyst to enter the report.

You receive the following e-mail from a banker describing a problem:

I was entering a home equity loan application for a customer with good credit. She owns a high-value house, though the loan amount is not very large. At the proper screen, HELLOCARMS popped up the "escalate to Senior Telephone Banker" message. However, I clicked continue and it allowed me to proceed, even though no Senior Telephone Banker Authorization Code had been entered. From that point forward in this customer's application, everything behaved normally. I had another customer with a similar application—high-value house, medium-sized loan amount—call in later that day. Again, it would let me proceed without entering the authorization code.

The exercise consists of two parts:

1. What IEEE 1044 recognition and recognition impact classification fields and data are available from this e-mail?
2. What steps would you take to clarify this report?

INCIDENT MANAGEMENT EXERCISE DEBRIEF

Rex did the solution to this exercise, so he'll describe it here.

First, I evaluated each of the pertinent recognition and recognition impact classifications and data fields to see whether this e-mail, or other information I assume I have, provides it. My analysis is shown in table 7-2.

Next, I have annotated the report with some steps I'd take to clarify it before putting it into the system. Each excerpt from the original e-mail is followed by my comments.

I was entering a home equity loan application for a customer with good credit. I would want to find out her exact data, including income, debts, assets, credit score, etc.

She owns a high-value house, though the loan amount is not very large. I would want to find out the exact value of the house and the loan amount. I would test various combinations of values and loan amounts to see if I could find a pattern.

At the proper screen, HELLOCARMS popped up the "escalate to Senior Telephone Banker" message. However, I clicked continue and it allowed me to proceed, even though no Senior Telephone Banker Authorization Code had been entered. I would want to find out if the banker entered anything at all into that field. I would test leaving it empty, inputting blanks, inputting valid characters that were not valid authorization codes, and conducting some other checks to see whether it is ignoring the field completely.

From that point forward in this customer's application, everything behaved normally. I would test to see whether such applications are transferred to LoDoPS or are silently discarded. If they are transferred to LoDoPS, does LoDoPS proceed or does it catch the fact that this step was missed?

I had another customer with a similar application—high-value house, medium-sized loan amount—call in later that day. Again, it would let me proceed without entering the authorization code. Here also I would want to find out the exact details on this applicant, the property value, and the loan amount.

>Practical No. 06

AIM : PATH TESTING AND EQUIVALENCE PARTITIONING

PATH TESTING

We have looked at many different control-flow coverage schemes. But there are other ways to design test cases using the structure as a test basis. In this section, we will look at path testing.

Remember that we have covered three main ways of approaching test design so far:

1. Statement testing, where the statements themselves drove the coverage.
2. Decision testing, where the branches drove the coverage.
3. Condition, decision/condition, modified condition/decision coverage, and multiple condition coverage, all of which looked at sub-expressions and atomic conditions of a particular decision.

CYCLOMATIC COMPLEXITY EXERCISE

The following C code function loads an array with random values.

1. int main (int MaxCols, int Iterations, int MaxCount)
2. {
3. int count = 0, totals[MaxCols], val = 0;
4.
5. memset (totals, 0, MaxCols * sizeof(int));
6.
7. count = 0;
8. if (MaxCount > Iterations)
9. {
10. while (count < Iterations)
11. {
12. val = abs(rand()) % MaxCols;
13. totals[val] += 1;
14. if (totals[val] > MaxCount)
15. {
16. totals[val] = MaxCount;
17. }
18. count++;
19. }
20. }
21. return (0);
22. }

1. Create a directed control-flow graph for this code.
2. Using any of the methods given in the preceding section, calculate the cyclomatic complexity.
3. List the basis tests that could be run.

CYCLOMATIC COMPLEXITY EXERCISE DEBRIEF

The directed control-flow graph should look like figure 4-45. Note that the edges D1 and D2 are labeled; D1 is where the if conditional in line 14 evaluates to TRUE, D2 is where it evaluates to FALSE. We could use three different ways to calculate the cyclomatic complexity of the code, as shown in the box on the right in the figure.

First, we could calculate the number of test cases by the region method. Remember, a region is an enclosed space. The first region can be seen on the left side of the image: the curved line goes from B to E, and it is enclosed by the nodes (and edges between them) B-C-E. The second region is the top edge that goes from C to D and is enclosed by line D1. The third is the region with the same top edge, C-D, that is enclosed by D2.

Here is the formula:
C = # Regions + 1
C = 3 + 1
C = 4

The second way to calculate cyclomatic complexity uses McCabe's cyclomatic complexity formula. Remember, we count up the edges (lines between bubbles) and the nodes (the bubbles themselves) as follows:
C = E - N + 2
C = 7 - 5 + 2
C = 4

Finally, we could use our rule-of-thumb measure, which usually seems to work. Count the number of places where decisions are made and add 1. In the code itself, we have line 8 (an if() statement), line 10 (a while() loop), and line 14 (an if() statement):
C = # decisions + 1
C = 3 + 1
C = 4

In each case, the cyclomatic complexity is equal to 4. That means our basis set of tests would also number 4. The following test cases would cover the basis paths:
1. ABE
2. ABCE
3. ABCD(D1)CE
4. ABCD(D2)CE

EQUIVALENCE PARTITIONING

Problem Statement :

A screen prototype for one screen of the HELLOCARMS system is shown in the figure below. This screen asks for three pieces of information:

a) The product being applied for, which is one of the following:
1. Home equity loan
2. Home equity line of credit
3. Reverse mortgage

b) Whether someone has an existing Globobank checking account, which is either Yes or No

c) Whether someone has an existing Globobank savings account, which is either Yes or No

• If the user indicates an existing Globobank account, then the user must enter the corresponding account number.
• This number is validated against the bank's central database upon entry.
• If the user indicates no such account, the user must leave the corresponding account number field blank.
• If the fields are valid, including the account number fields, then the screen will be accepted. If one or more fields are invalid, an error message is displayed.
• Show the equivalence partitions for each of the three pieces of information, indicating valid and invalid members.
• Create test cases to cover these partitions, keeping in mind the rules about combinations of valid and invalid members.

THEORY :

The first group of dynamic testing techniques comprises the specification-based testing techniques. These are also known as 'black-box' or input/output-driven testing techniques because they view the software as a black box with inputs and outputs, but they have no knowledge of how the system or component is structured inside the box. In essence, the tester is concentrating on what the software does, not how it does it.

Functional testing is concerned with what the system does, its features or functions. Non-functional testing is concerned with examining how well the system does something, rather than what it does. Non-functional aspects (also known as quality characteristics or quality attributes) include performance, usability, portability, maintainability, etc.

Techniques to test these non-functional aspects are less procedural and less formalized than those of other categories, as the actual tests are more dependent on the type of system, what it does, and the resources available for the tests.

The four specification-based techniques are:
• Equivalence partitioning
• Boundary value analysis
• Decision tables
• State transition testing

Equivalence partitioning (EP) is a good all-round specification-based black-box technique. It can be applied at any level of testing and is often a good technique to use first. It is a common-sense approach to testing, so much so that most testers practise it informally even though they may not realize it. The idea behind the technique is to divide (i.e., to partition) a set of test conditions into groups or sets that can be considered the same (i.e., the system should handle them equivalently), hence 'equivalence partitioning'. Equivalence partitions are also known as equivalence classes - the two terms mean exactly the same thing.

The equivalence-partitioning technique then requires that we need test only one condition from each partition. This is because we are assuming that all the conditions in one partition will be treated in the same way by the software. If one condition in a partition works, we assume all of the conditions in that partition will work, and so there is little point in testing any of the others.

Conversely, if one of the conditions in a partition does not work, then we assume that none of the conditions in that partition will work, so again there is little point in testing any more in that partition. Of course these are simplifying assumptions that may not always be right, but if we write them down, at least it gives other people the chance to challenge the assumptions we have made and hopefully help to identify better partitions.
SOLUTION :

For the application-product field, the equivalence partitions are as follows:

# Partition
1. Home equity loan
2. Home equity line of credit
3. Reverse mortgage

The existing checking account information partitions are as follows:

# Partition
1. Yes-Valid
2. Yes-Invalid
3. No-Blank
4. No-Nonblank

The existing savings account information partitions are as follows:

# Partition
1. Yes-Valid
2. Yes-Invalid
3. No-Blank
4. No-Nonblank

Test Cases:

Inputs              1      2      3      4        5         6        7
Product             HEL    LOC    RM     HEL      LOC       RM       HEL
Existing Checking?  Yes    No     No     Yes      No        No       No
Checking Account    Valid  Blank  Blank  Invalid  Nonblank  Blank    Blank
Existing Savings?   No     Yes    No     No       No        Yes      No
Savings Account     Blank  Valid  Blank  Blank    Blank     Invalid  Nonblank
Accept?             Yes    Yes    Yes    No       No        No       No
Error?              No     No     No     Yes      Yes       Yes      Yes

Product field partitions along with test cases:

# Partition                    Test Case
1. Home equity loan            1
2. Home equity line of credit  2
3. Reverse mortgage            3

Existing checking account partitions along with test cases:

# Partition     Test Case
1. Yes-Valid    1
2. Yes-Invalid  4
3. No-Blank     3
4. No-Nonblank  5

Existing savings account partitions along with test cases:

# Partition     Test Case
1. Yes-Valid    2
2. Yes-Invalid  6
3. No-Blank     1
4. No-Nonblank  7
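
The acceptance rule these partitions and test cases exercise can be modeled in a few lines. The sketch below is a hypothetical Java rendering of the screen's validation logic; the account-number check is a stub standing in for the lookup against the bank's central database.

import java.util.Set;

public class ScreenValidation {
    static final Set<String> PRODUCTS =
            Set.of("Home equity loan", "Home equity line of credit", "Reverse mortgage");

    // Stub for the central-database lookup of an account number.
    static boolean accountExists(String accountNumber) {
        return accountNumber.matches("\\d{8}");   // placeholder rule
    }

    // One account field is valid if: Yes plus a number found in the database,
    // or No plus the number field left blank.
    static boolean accountFieldValid(boolean hasAccount, String accountNumber) {
        return hasAccount ? accountExists(accountNumber) : accountNumber.isEmpty();
    }

    static boolean screenAccepted(String product,
                                  boolean hasChecking, String checkingNumber,
                                  boolean hasSavings, String savingsNumber) {
        return PRODUCTS.contains(product)
                && accountFieldValid(hasChecking, checkingNumber)
                && accountFieldValid(hasSavings, savingsNumber);
    }

    public static void main(String[] args) {
        // Test case 1: HEL, checking Yes-Valid, savings No-Blank -> accept (true).
        System.out.println(screenAccepted("Home equity loan", true, "12345678", false, ""));
        // Test case 4: HEL, checking Yes-Invalid, savings No-Blank -> reject (false).
        System.out.println(screenAccepted("Home equity loan", true, "oops", false, ""));
    }
}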

>Practical No. 07

AIM : PERFORMANCE TESTING

PERFORMANCE TESTING EXERCISE

Given the efficiency requirements in the HELLOCARMS system requirements document, determine the actual points of measurement and give a brief description of how you will measure the following:
1. 040-010-050
2. 040-010-060
3. 040-020-010
The results will be discussed in the next section.

PERFORMANCE TESTING EXERCISE DEBRIEF

040-010-050 :

Credit-worthiness of a customer shall be determined within 10 seconds of request. 98% or more of all Credit Bureau Mainframe requests shall be completed within 2.5 seconds of the request arriving at the Credit Bureau.

In this case, we are testing time behavior. Note that this requirement is badly formed in that there are two completely different requirements in one. That being said, we should be able to use the same test to measure both.

The first, 2.5 seconds to complete the Credit Bureau request: Ideally, this measurement would be taken right at the Credit Bureau Mainframe, but that is probably not possible given its location. Instead, we would have to instrument the Scoring Mainframe and measure the timing of a transaction request to the Credit Bureau Mainframe against the return of that same transaction. That would not be exact because it does not include transport time, but since we are talking about a rather large time (2.5 seconds), it would likely be close enough.

The second, 10 seconds for the determination from the Telephone Banker side: This measurement could be taken from the client side. Start the timer at the point the Telephone Banker presses the Enter button and stop it at the point the screen informs the banker that it has completed. Note that in this case, we would infer that the client workstation must be part of the loop, but since it is single threaded (i.e., only doing this one task), we would expect actual client time to be negligible. So, our actual measurement could conceivably be taken from the time the virtual user (VU) sends the transaction to the time we get a return on the wire. That would allow the performance tool to measure the time.
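
Wire-level timing of this kind can be approximated with a simple stopwatch around the request. The sketch below is illustrative only: it times a single HTTP transaction against a hypothetical endpoint using java.net.HttpURLConnection, whereas a real performance tool would drive many virtual users and aggregate percentiles.

import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class TransactionTimer {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint standing in for the credit-decision transaction.
        URL url = new URL("http://localhost:8080/hellocarms/decision");

        long start = System.nanoTime();                      // send the transaction
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream()) {
            while (in.read() != -1) { /* drain the response */ }
        }
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // Compare against the 10-second requirement for the determination.
        System.out.println("Response time: " + elapsedMs + " ms "
                + (elapsedMs <= 10_000 ? "(within limit)" : "(exceeds 10 s limit)"));
    }
}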

Clearly, this test would need to be run with different levels of load to make sure there is no degradation at rated load: 2,000 applications per hour in an early release (040-010-110) and later 4,000 applications per hour (040-010-120). In addition, it would need to be run for a fairly long time to get an acceptable data universe for calculating the percentage of transactions that met the requirement.

040-010-060 :

Archiving the application and all associated information shall not impact the Telephone Banker's workstation for more than .01 seconds. Again, we are measuring time behavior.

Because the physical archiving of the record is only a part of this test, this measurement would be made by an automation tool running concurrently with the performance tool. Our assumption is that, the way the requirement is worded, we want the Telephone Banker ready to take another call within the specified time period. That means the workstation must reset its windows, clear the data, etc. while the archiving is occurring.

We would have a variety of scenarios that run from cancellation by the customer to declined applications to accepted ones. We would include all three types of loans, both high and low value. These would be run at random by the automation tool while the performance tool loaded down the server with a variety of loans.

The start of the time measurement will depend on the interface of HELLOCARMS. After a job has completed, the system might be designed to reset itself, or the user might be required to signal readiness to start a new job. If the system resets itself, we would start an electronic timer at the first sign of that occurring and stop it when the system gives an indication that it is ready. If the user must initiate the reset by hand, that will be the trigger to start the timer.

040-020-010 :

Load Database Server to no more than 20% CPU and 25% resource utilization average rate, with peak utilization never more than 80%, when handling 4,000 applications per hour.

This requirement is poorly formed. During review, we would hope to get clarification on exactly what resources are being discussed when specifying percentages. For the purpose of this exercise, we are going to make the following assumptions:
■ 25% resource utilization will be limited to testing for memory and disk usage.
■ Peak utilization applies to CPU alone.

This test will be looking at resource utilization, so we would monitor a large number of metrics directly on the server side for all servers:

■ Processor utilization on all servers supplying the horsepower
■ Available memory
■ Available disk space
■ Memory pages per second
■ Processor queue length
■ Context switches per second
■ Queue length and time of physical disk accesses
■ Network packets received errors
■ Network packets outbound errors
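
Several of these counters can be sampled in-process when the monitor runs on a JVM. The sketch below is a minimal, assumption-laden example: it uses the com.sun.management extension of OperatingSystemMXBean (method names as of JDK 14; older JDKs expose getSystemCpuLoad() and getTotalPhysicalMemorySize() instead) and covers only CPU, memory, and disk space from the list above.

import com.sun.management.OperatingSystemMXBean;
import java.io.File;
import java.lang.management.ManagementFactory;

public class ResourceMonitor {
    public static void main(String[] args) throws InterruptedException {
        // The cast to the com.sun.management extension works on HotSpot-based JVMs.
        OperatingSystemMXBean os = (OperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        File disk = new File("/");

        // Sample a few of the counters once per second.
        for (int i = 0; i < 5; i++) {
            double cpu = os.getCpuLoad() * 100;   // whole-system CPU, 0-100%
            double memUsedPct = 100.0 * (os.getTotalMemorySize() - os.getFreeMemorySize())
                    / os.getTotalMemorySize();
            long diskFreeGb = disk.getUsableSpace() / (1024L * 1024 * 1024);

            System.out.printf("CPU %.1f%%  memory %.1f%%  disk free %d GB%n",
                    cpu, memUsedPct, diskFreeGb);
            Thread.sleep(1000);
        }
    }
}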

The test would be run for an indeterminate time, ramping up slowly and then running at the rated rate (4,000 applications per hour) with occasional forays just above the required rate.

After the performance test runs, we would graph the metrics that had been captured from the Database Server. Had the server CPU reached 80% at any time, we would have to consider the test failed. Averaging over the entire time the test had been running, we would look at memory, disk, and CPU usage on the Database Server to make sure they averaged less than the rated values.

Note that this test points out a shortcoming of performance testing. In production, the Database Server could easily be servicing other processes beyond HELLOCARMS. These additional duties could easily cause the resources to have higher utilization than allowed under these requirements.

In performance testing, it is always difficult to make sure we are comparing apples to apples and oranges to oranges. We fear that without more information, we might just be measuring an entire fruit salad in this test.
>Practical No. 8

AIM : TEST AUTOMATION USING SELENIUM IDE

Problem Statement :

Automate a web test using Selenium IDE and Firebug.

Theory :

Selenium is a free (open source) automated testing suite for web applications across different browsers and platforms. Selenium is not just a single tool but a suite of software, each piece catering to different testing needs of an organization. It has four components:
• Selenium Integrated Development Environment (IDE)
• Selenium Remote Control (RC)
• WebDriver
• Selenium Grid

Selenium Integrated Development Environment (IDE) is the simplest framework in the Selenium suite and is the easiest one to learn. It is a Firefox plugin that you can install as easily as you can other plugins. However, because of its simplicity, Selenium IDE should only be used as a prototyping tool. If you want to create more advanced test cases, you will need to use either Selenium RC or WebDriver.