Thursday, September 11, 2014

Different Roles in Project Development



Hello friends!
There are various roles involved in developing any software project. Most software development processes share a common series of roles. In some cases one team member may fill many roles, and some roles may be suppressed for a specific type of project, but all of these roles exist in one form or another in every software development project:


Subject Matter Experts (SME)
The Subject Matter Expert is the person or persons from whom requirements are captured. These are the people who know what the software needs to do and how the process works. The SME role is somewhat different from the other roles because it is constantly changing as new clients (internal or external) are brought in to help design a solution. SMEs are rarely from IT -- except when the solution is being designed to support IT. SMEs are most frequently the people who will receive the benefit of the system.
Functional Analysts (FA)
Functional Analysts have the unenviable task of eliciting clear, concise, non-conflicting requirements from the Subject Matter Experts who may or may not understand how technology can be used to transform the business processes in a positive way.
Solutions Architect (SA)
The Solutions Architect is responsible for transforming the requirements created by the Functional Analysts into a set of architecture and design documents that can be used by the rest of the team to actually create the solution. The Solutions Architect is typically responsible for matching technologies to the problem being solved.
Development Lead (DL)
The Development Lead's role is focused on providing more detail to the Solutions Architect's architecture. This includes creating detailed program specifications. The Development Lead is also the first line of support for the developers who need help understanding a concept or working through a particularly thorny issue.
Developer (Dev)
The heart and soul of the process, the developer actually writes the code from the specifications that the Development Leads provided.
Quality Assurance (QA)
The Quality Assurance role is an often-thankless position that is designed to find bugs before they find their way to the end customers. Using a variety of techniques ranging from keying in data and playing with the system to formalized, automated testing scripts, the Quality Assurance team is responsible for ensuring the quality of the solution and its fit to the requirements gathered by the Functional Analyst. Sometimes the QA team is known by their less flattering name of testers.

Deployment (Deploy)
The Deployment role is the one that packages up all of the compiled code and configuration files and deploys it through the appropriate environments or on the appropriate systems. The Deployment role is focused on getting the solution used. To that end, the role may include automated software installation procedures or may be as simple as copying the files to the appropriate place and running them.
Training
The Training role is responsible for documentation for the system as well as any instructor or computer-based training solutions that are designed to help the users better understand how the system works and what they can do with it.
Project Manager (PM)
The Project Manager is responsible for ensuring consistent reporting, risk mitigation, timeline, and cost control. The project manager role is a problem-solver role. They try to resolve problems while they are small so that they can be handled more quickly and with less cost.
Development Manager (DM)
The Development Manager is responsible for managing the multiple, often conflicting priorities of several projects. The Development Manager role is also an escalation point for issues that the team is unable to resolve internally. Of course, each organization has its own take on these roles; however, these are the roles you'll see most often in an organization doing development.
Here I have tried to cover, to the best of my knowledge, all the roles involved in project development.

Custom Columns in Quality Center

Sometimes the default fields in QC just don’t cut it. How many times have you been working in QC and wished you could filter by a particular something? Yes, you can! You can set up a custom column and then filter on it. The only thing is that you must have admin privileges for that project!

How is this done?

Simple!

In the top right-hand corner, click Tools > Customize. The “Project Customization” page displays. Next, click “Customize Project Entities.” A list displays with the different fields sectioned off by topic. In this case, my problem area was DEFECT. Expand that dropdown, and two folders display. Select the User Fields folder. At the bottom of the screen is a New Field button. Clicking that creates the wonderful new column.

Of course, now that the column exists, you can name it and set the type and whether or not it’s required. I love using the Look-up List type. It works the same way as the Planned Closing Version or Assigned To drop-downs.

To add values, just select “Goto List.” A popup will display. Clicking New Item (or Rename Item, since a default item already exists) lets you populate the newly created field. When you are done, close and save.

Now if you go to the Defects page and click New Defect, your new field is there. Select “Select Columns” and add the field to the Visible Columns list. The column will appear on the Defects page and is sortable.
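As a rough mental model (plain Python, not anything QC itself exposes), a custom column is just an extra field carried on every defect record; once the field exists, filters can key on it exactly like a built-in field. The field name BG_USER_01 below follows QC's naming pattern for defect user fields, but the records and values are made up for illustration:

```python
# Conceptual sketch only: a custom "user field" is an extra key on
# each defect record, and filtering by it becomes possible only
# once the field exists on the records.

defects = [
    {"id": 1, "summary": "Login fails", "severity": "High"},
    {"id": 2, "summary": "Typo on page", "severity": "Low"},
]

# "Create" the custom column: give every defect the new field.
# (BG_USER_01 mimics QC's naming for defect user fields.)
for d in defects:
    d["BG_USER_01"] = "Regression" if d["id"] == 1 else "New Feature"

# Now the column can be filtered, just like a built-in field.
regression = [d for d in defects if d["BG_USER_01"] == "Regression"]
print([d["id"] for d in regression])  # -> [1]
```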
 
So, that's how QC can be customized.
 
Thanks!

How to work with Reusable Test Cases?

Reusable test cases save you from defining the same test steps repeatedly for a common action, e.g. the login and logout process in a user-registration business flow. The idea is: first create the test case you want to reuse, then open the test case in which you want to reuse it, and finally select the reusable test case from the test tree.
Let us see step by step details-
Creating Test Parameters in a Design Step
    You can add a parameter to the description or expected results of a manual design step. You can use existing parameters, or define new parameters.
To insert a test parameter in a design step:
  1. In the Design Steps tab, double-click a design step. Place the cursor in the Description box or Expected Result box.
  2. Click the Insert Parameter button. The Insert Parameter dialog box opens.
  3. To insert an existing parameter, select the parameter and click OK. The parameter is added to the step at the current cursor location, using the syntax <<<parameter name>>>.

Note: If you apply formatting to a parameter name in a design step, you must apply the same formatting to the entire parameter name, including the <<< and >>> characters. For example, if you want to italicize the parameter <<<password>>>, you must italicize the entire string <<<password>>> and not just the word password.
Designed test looks like –
Now the reusable test is defined.
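As a sketch of what happens when the reusable test is later called with concrete values, each <<<parameter name>>> placeholder in a step is replaced by the value supplied at call time. This is plain Python mimicking the substitution, not QC's own code; the step text and values are made-up examples:

```python
import re

def expand_step(text, values):
    """Replace each <<<name>>> placeholder with its supplied value."""
    return re.sub(r"<<<(.+?)>>>", lambda m: values[m.group(1)], text)

step = "Enter <<<username>>> and <<<password>>>, then click Login."
filled = expand_step(step, {"username": "testuser1", "password": "secret"})
print(filled)  # -> Enter testuser1 and secret, then click Login.
```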
In order to call this test from various test cases:
  1. Click on Call to Test.
  2. Select the reusable test from the test tree.
  3. Click OK. The Parameters to Test window will open, where you have to pass values for the parameters defined in the reusable test.
  4. Enter a value for every parameter.
  5. Click OK. That’s it; you have called the reusable test and passed values to its parameters.

What is Priority & Severity in Quality Center


The difference between the severity and priority of a defect is one of the most common questions. “Priority” is associated with scheduling, and “severity” is associated with standards.

My interpretation of “severity” is how severe the bug is with respect to the system, the user, and the business. It is the impact of the bug that you find: this could be something with minimal impact, such as a spelling mistake in a paragraph of text, or something more severe, such as an application or server error screen that appears when clicking a button or option.

Priority, on the other hand, is the importance of fixing the defect from the business perspective. Defect priority determines the order in which defects will be fixed/resolved and retested. In effect, this sets a level of importance for fixing the defect purely in the context of how it affects the business. I see this as a business decision, so testers might not necessarily set it, or they might, with the intention of updating it after discussions with the Test Manager or other team members. The Business Owner of the impacted solution should prioritize the defect in terms of the urgency with which it needs to be fixed/resolved.

Once a defect is raised, it should be reviewed (to confirm it is a valid issue and not a data discrepancy, user error, etc.) and the relevant severity should be assigned, either coordinated by a Defect Manager/Coordinator or self-assessed by the QA Engineer.
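The distinction can be made concrete with a small sketch (plain Python with illustrative data, not a QC feature): severity records the technical impact, while priority alone decides the fix/retest order, so a low-severity cosmetic defect can still be fixed early if the business ranks it as urgent:

```python
# Illustrative triage sketch: severity describes impact, but the
# fix/retest queue is ordered purely by business-assigned priority.

defects = [
    {"id": 101, "summary": "Server error on Save", "severity": "Critical", "priority": 1},
    {"id": 102, "summary": "Typo in footer", "severity": "Low", "priority": 4},
    {"id": 103, "summary": "Logo misaligned on home page", "severity": "Low", "priority": 2},
]

# Defect 103 is low severity, yet the business wants this
# high-visibility cosmetic issue fixed before defect 102.
fix_order = sorted(defects, key=lambda d: d["priority"])
print([d["id"] for d in fix_order])  # -> [101, 103, 102]
```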

Thursday, June 12, 2014