Developing Code using Visual Studio Team System
Microsoft has always offered some of the best tools for developing code with high productivity. Continuing that tradition, Visual Studio 2005, working as a client for Team Foundation Server, offers abundant features for developing quality code productively. Not only does it allow code creation, but it also enables testing of that same code. In this article we will take an overview of some of those tools and concepts. The topics covered are:
1. Class Designer – Class Diagram
2. Refactoring
3. Code Analysis
4. Unit Testing
5. Code Profiling – Performance Session
Let us begin this overview with the topic of creating code. Generally, we begin coding with the skeletal code provided by the architectural tools. Usually we need to add some more classes to fulfill the functionality and quality-of-service requirements. We can create those classes in two different ways. One way is to start writing the class and its code by hand, which is the traditional method. The new method provided by Visual Studio 2005 is called the “Class Designer”.
Class Designer provides a graphical interface for creating and editing classes. Those classes may have been generated by the architectural tools, or they may have been created by the developer. To open the Class Designer, we create a class diagram: we simply select the code file and choose “View Class Diagram” from its context menu. We may also add a class diagram from Add >> Add New Item in the context menu of the project. Once the class diagram is created, it provides the class designer surface, shown with a red border in Fig.1. A class diagram file has the extension .cd. We can now add classes by dragging and dropping them from the toolbox, shown with a blue border. We can also add existing classes by dragging and dropping them from the Class View, shown with a yellow border. We can provide the details of a class (its methods, fields, properties and events) from the Class Details window, shown with a green border.
In this diagram we can see two classes and an interface that was created by refactoring a class. The class “User” has private fields for the user id, user name and password, properties that encapsulate those fields, and a constructor. Once the design of the class is done, we can view its code by opening the corresponding .cs file. It shows all the fields, properties, methods and events of the class. The code of the classes and the class diagram are kept constantly synchronized; a change in either one is immediately reflected in the other.
The code will look as shown in Fig.2.
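Fig.2 is not reproduced here, but based on the description above, the code generated by the Class Designer would look roughly like the following sketch. The exact member names and the constructor signature are assumptions; note the C# 2.0-era explicit property bodies, since auto-implemented properties did not exist yet.

```csharp
// Sketch of the code the Class Designer might generate for the "User"
// class described above; names and layout are illustrative assumptions.
public class User
{
    // Private fields added through the Class Details window
    private int userId;
    private string userName;
    private string password;

    // Constructor designed on the class diagram surface
    public User(int userId, string userName, string password)
    {
        this.userId = userId;
        this.userName = userName;
        this.password = password;
    }

    // Properties that encapsulate the private fields
    public int UserId
    {
        get { return userId; }
        set { userId = value; }
    }

    public string UserName
    {
        get { return userName; }
        set { userName = value; }
    }

    public string Password
    {
        get { return password; }
        set { password = value; }
    }
}
```

Editing this code (say, renaming a property) would immediately update the shape on the class diagram, and vice versa.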
Refactoring is a formal and mechanical way to improve code quality. Refactored code is usually more readable, reusable and maintainable, and it often performs better and provides better scalability and reliability. Although there is no single formal standard for refactoring, there are certain well-known principles by which code should be refactored. Prior to Visual Studio 2005, refactoring was a manual process involving a lot of cutting, copying, pasting and testing. Visual Studio 2005 provides an automated, mechanical way of refactoring. The refactorings supported by Visual Studio 2005 include Extract Method, Rename, Encapsulate Field, Extract Interface, Promote Local Variable to Parameter, Remove Parameters and Reorder Parameters.
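As a small illustration of one of these refactorings, Extract Method, consider the following hypothetical before-and-after sketch (the method and validation logic are invented for the example):

```csharp
public class UserRepository
{
    // Before: a hypothetical method with validation logic embedded inline.
    public void Save(string userName)
    {
        if (userName == null || userName.Length == 0)
            throw new System.ArgumentException("User name is required.");
        // ... persist the user ...
    }

    // After applying Extract Method: the selected validation code is pulled
    // into its own reusable method, whose signature Visual Studio 2005
    // generates automatically from the selection.
    public void SaveRefactored(string userName)
    {
        ValidateUserName(userName);
        // ... persist the user ...
    }

    private void ValidateUserName(string userName)
    {
        if (userName == null || userName.Length == 0)
            throw new System.ArgumentException("User name is required.");
    }
}
```

The value of the tool is that the cut-paste-and-fix-call-sites work that used to be done by hand is performed mechanically, so the behavior of the code cannot drift during the edit.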
We are always interested in creating code that adheres to certain design principles, results in a secure application, is easily maintained, and so on. While writing code, we may not be able to pay proper attention to all these desirable factors. If we analyze and correct the code after it is created, it will embody the good qualities mentioned earlier. In a non-trivial application, analyzing the code manually is itself a complex, non-trivial task; if it is automated, we only need to take the corrective actions. Prior to Visual Studio Team System there were third-party tools that could be added to Visual Studio. One of them, FxCop, was very popular. Microsoft has added the full functionality of FxCop to Visual Studio Team System.
We can run static code analysis on the created code on demand, and we can also create a check-in policy that runs static code analysis automatically whenever code is checked in. Analyzing and correcting the code as soon as it is written prevents many other discrepancies from creeping in. Before analyzing the code, we first set a policy that serves as the guidelines for the code analysis tool. This policy is set per project with the help of the Code Analysis property page. Through this page we can enable or disable rules grouped into categories such as Design, Globalization, Naming, Performance, Security and Usage.
We can either turn Code Analysis on using the same property page, or run it on demand. In the first case, code analysis happens every time the code is compiled; in the second case, it can be executed through the context menu of the project. Every non-conformity with the policy is shown as a warning, in addition to any compiler errors.
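When a reported warning is a deliberate exception for our scenario, it can be suppressed in source with the SuppressMessage attribute from the System.Diagnostics.CodeAnalysis namespace. The class, method and justification below are invented for the example; CA1709 is a real naming rule inherited from FxCop.

```csharp
using System.Diagnostics.CodeAnalysis;

public class OrderProcessor
{
    // Suppresses one specific Code Analysis warning for this member only.
    // A Justification documents why the rule does not apply here.
    [SuppressMessage("Microsoft.Naming",
        "CA1709:IdentifiersShouldBeCasedCorrectly",
        Justification = "Name matches an external specification.")]
    public void ProcessXMLOrder()
    {
        // ... process the order ...
    }
}
```

Note that SuppressMessage is a conditional attribute: it takes effect only when the CODE_ANALYSIS compilation symbol is defined, so suppressions add no metadata to ordinary release builds.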
We can set a check-in policy in such a way that whenever code is about to be checked in, code analysis is executed first.
Unit testing means testing a custom code unit immediately after it is created, or even as part of its creation process. The reason to do unit testing is to catch errors in the code at an early stage. In a complex application, the earlier errors and bugs are detected and removed, the less effort is required to debug the entire application at more advanced stages. This improves the overall quality of the application being created. A custom code unit is usually a class in the application code. A test is a method in a specially attributed class that is executed by a test environment such as VSTS.
Unit testing involves providing inputs, for which the expected output is known, to a method of the unit. The results obtained by executing the method are compared against the known output values. If they match, the test passes; otherwise it fails, which entails corrections in the application's custom code.
In Visual Studio Team System, we write a test class that contains test methods. A test method creates an instance of a class in the application code and calls its methods, passing parameter values for which the output is known. A unit test in VSTS can be created in two ways:
1. Creation using the wizard
2. Authoring the code of the test by hand
Creating the unit test through the wizard provides many advantages, such as automatic generation of the test project, the test class skeleton and the references to the code under test.
In VSTS unit testing, every test starts in the Pass status. The test environment is notified of the test results through asserts. An assert states a ‘truth, or what is believed to be the truth’. The Assert class, in the Microsoft.VisualStudio.TestTools.UnitTesting namespace, has many methods that are used to compare actual results with the known output values.
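Putting these pieces together, a minimal VSTS test class might look like the following sketch. The Calculator class and its Add method are hypothetical stand-ins for a class from the application code; the attributes and the Assert call come from the Microsoft.VisualStudio.TestTools.UnitTesting namespace mentioned above.

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class under test, standing in for application code.
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}

[TestClass]
public class CalculatorTest
{
    [TestMethod]
    public void AddReturnsSumOfOperands()
    {
        // Create an instance of the class under test.
        Calculator calculator = new Calculator();

        // Call the method with inputs whose output is known.
        int result = calculator.Add(2, 3);

        // Compare the actual result with the known value. If they differ,
        // the assert fails and the test is reported as Failed.
        Assert.AreEqual(5, result);
    }
}
```

Since every test starts in the Pass status, a test method that never reaches a failing assert (or throws an exception) is reported as passed.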
In this way, developers can use unit testing to create quality code.
Code Profiling using Performance Session
A performance session allows the user to configure settings that determine how the application is profiled. The profile shows which methods are called the most and where the performance bottlenecks occur. A performance session also stores the reports generated for it. A performance session is created by running the Performance Wizard or by creating a session manually, and it is visually represented in the Performance Explorer.
To view Performance Session properties, select the session in Performance Explorer, right-click it and then choose Properties.
A performance session has the following property pages:
General
These settings allow you to choose between sampling and instrumentation, collect .NET object allocation and lifetime data, and specify the default report location and name.
Launch
These settings allow you to choose from a list of binaries and specify the launch order of those binaries.
Sampling
These settings allow you to choose the sample event and the sampling interval. A sample event is used to collect performance data at the specified interval. For example, if the sample event is clock cycles and the sampling interval is set to 10,000,000, then performance data is collected every 10 million clock cycles. The following four types of sample events are available:
• Clock Cycles - for CPU bound problems
• Page Faults - for memory related problems
• System Calls - for I/O related problems
• Performance Counters - for low-level performance problems
Binaries
These settings allow you to specify whether to relocate the instrumented binary to another location. For example, if you are profiling My.DLL and choose not to relocate the instrumented binary, then during profiling a backup copy of My.DLL named My.Orig.DLL is created, and My.DLL itself is modified by inserting probes to collect data. If you choose to relocate the instrumented binary, the original binary is not renamed and the instrumented copy is placed at the specified location for use during instrumentation.
Instrumentation
These settings allow you to specify any pre-instrument and post-instrument events that must occur as part of the instrumentation process.
CPU Counters
These settings allow you to collect data from on-chip performance counters. For more information about on-chip performance counters, see the documentation for the specific processor.
Windows Events
During profiling, you can collect data from event trace providers. You can view this data by using the VSPerfReport.exe command-line tool.
Advanced
These settings allow you to list specific functions that you want to instrument. For example, to instrument a function called MyFunction, list it as -include:MyFunction in the Additional instrumentation options text box. Use the wildcard character '*' to specify multiple functions. For example, specify -exclude:MyNamespace::* to instrument all functions except those in the namespace MyNamespace.
Of the two ways of profiling, sampling and instrumentation, sampling is less intrusive and places little load on the hardware, so the executing application does not slow down. Instrumenting the code provides more accurate results, but it also puts more load on the hardware.
In this article we have seen how to use various features of Visual Studio Team Edition for Developers to create quality code with high productivity. We covered tools such as the Class Designer, Refactoring, Static Code Analysis, Unit Testing and Code Profiling.
This article has been editorially reviewed by Suprotim Agarwal.