A website is a collection of Web pages, images, videos and other digital assets that is hosted on one or more Web servers, usually accessible via the Internet, a mobile network, or a LAN.
The pages of a website can usually be accessed from a common root URL called the homepage, and usually reside on the same physical server. The URLs of the pages organize them into a hierarchy, although the hyperlinks between them control how the reader perceives the overall structure and how traffic flows between the different parts of the site.
A website requires attractive design and proper arrangement of links and images, which enables a visitor to easily navigate the site and access its content. It thus provides the visitor with adequate information about, and the functionality of, the organization, community, network, etc.
1.2 ABOUT THE PROJECT
The website has been developed for our college (SNGCE) in an effort to make it as attractive and dynamic as possible. Compared to the existing site, our project adds database connectivity.
The working of the project is as follows.
The first page provides several links. The Home page contains general information about the college, such as the campus, management, facilities, and infrastructure.
The User Login module helps the user log in to the site by entering a valid username and password. The login provision on this page lets an already registered user directly access the site, and there is a link to registration for a user who is new to the site.
The Member Registration module helps a new user register on the site. The information entered by the user is inserted into the registration table.
Through the Login link a recruiter can log in using the appropriate username and password, after which he can submit the criteria required for a student to appear for a placement drive. He can also post the number of vacancies available and the salary packages offered.
The flash news and the events corner display the latest developments, announcements and events associated with the college activities.
The administrator is responsible for displaying the recruiter's form on the notice board, in response to which a student can submit his or her willingness to attend the drive along with a resume.
System analysis is the process of gathering and interpreting facts, diagnosing problems and using the information to recommend improvements on the system. System analysis is a problem solving activity that requires intensive communication between the system users and system developers.
System analysis, or study, is an important phase of any system development process. The system is studied to the minutest detail and analyzed. The system analyst plays the role of an interrogator and delves deep into the working of the present system. The system is viewed as a whole and the inputs to the system are identified. The outputs from the organization are traced through the various processes that the inputs pass through in the organization.
A detailed study of these processes must be made using techniques like interviews, questionnaires, etc. The data collected from these sources must be scrutinized to arrive at a conclusion. The conclusion is an understanding of how the system functions. This system is called the existing system. Now, the existing system is subjected to close study and the problem areas are identified. The designer now functions as a problem solver and tries to sort out the difficulties that the enterprise faces. The solutions are given as a proposal. The proposal is then weighed against the existing system analytically and the best one is selected. The proposal is presented to the user for endorsement. The proposal is reviewed on user request and suitable changes are made. This loop ends as soon as the user is satisfied with the proposal.
2.2 EXISTING SYSTEM
The existing college website is static, which makes it less interactive. It has no database connectivity. Moreover, students did not have access to the details of the college through the site, and hence were not updated about the latest events and placement drives.
2.3 PROPOSED SYSTEM
In order to make the site dynamic and more interactive, we have included a database link in our college website. Recruiters are thus provided with the facility to post their eligibility criteria, vacancies, and salary packages, in response to which a student can submit his or her willingness to appear for the drive along with personal details. Provision has also been made to display the latest events and announcements associated with the college online. We have developed our project using a three-tier architecture, which uses the following languages.
2.4 FEATURES OF SOFTWARE
VISUAL STUDIO .NET EDITIONS
2.4.1 ASP.NET - FRONT END
ASP.NET is not just a simple upgrade or the latest version of ASP. It combines unprecedented developer productivity with performance, reliability, and ease of deployment. ASP.NET redesigns the whole process: it is still easy to grasp for newcomers, but it provides many new ways of managing projects. Below are the features of ASP.NET.
• Easy Programming Model
ASP.NET makes building real-world Web applications dramatically easier. ASP.NET server controls enable an HTML-like style of declarative programming that lets you build great pages with far less code than classic ASP. Displaying data, validating user input, and uploading files are all amazingly easy. Best of all, ASP.NET pages work in all browsers, including Netscape, Opera, AOL, and Internet Explorer.
• Flexible Language Options
ASP.NET lets you leverage your current programming language skills. Unlike classic ASP, which supports only interpreted VBScript and JScript, ASP.NET supports more than 25 .NET languages (with built-in support for VB.NET, C#, and JScript.NET), giving us unprecedented flexibility in the choice of language.
• Great Tool Support
We can harness the full power of ASP.NET using any text editor, even Notepad. But Visual Studio .NET adds the productivity of Visual Basic-style development to the Web. Now we can visually design ASP.NET Web Forms using familiar drag-drop-double click techniques, and enjoy full-fledged code support including statement completion and color-coding. VS.NET also provides integrated support for debugging and deploying ASP.NET Web applications. The Enterprise versions of Visual Studio .NET deliver life-cycle features to help organizations plan, analyze, design, build, test, and coordinate teams that develop ASP.NET Web applications. These include UML class modeling, database modeling (conceptual, logical, and physical models), testing tools (functional, performance and scalability), and enterprise frameworks and templates, all available within the integrated Visual Studio .NET environment.
• Rich Class Framework
Application features that used to be hard to implement, or required a 3rd-party component, can now be added in just a few lines of code using the .NET Framework. The .NET Framework offers over 4500 classes that encapsulate rich functionality like XML, data access, file upload, regular expressions, image generation, performance monitoring and logging, transactions, message queuing, SMTP mail, and much more. With improved performance and scalability, ASP.NET lets us serve more users with the same hardware.
• Compiled execution
ASP.NET is much faster than classic ASP, while preserving the "just hit save" update model of ASP. No explicit compile step is required: ASP.NET automatically detects any changes, dynamically compiles the files if needed, and stores the compiled results for reuse on subsequent requests. Dynamic compilation ensures that the application is always up to date, and compiled execution makes it fast. Most applications migrated from classic ASP see a 3x to 5x increase in pages served.
• Rich output caching
ASP.NET output caching can dramatically improve the performance and scalability of the application. When output caching is enabled on a page, ASP.NET executes the page just once, and saves the result in memory in addition to sending it to the user. When another user requests the same page, ASP.NET serves the cached result from memory without re-executing the page. Output caching is configurable, and can be used to cache individual regions or an entire page. Output caching can dramatically improve the performance of data-driven pages by eliminating the need to query the database on every request.
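As a small sketch, output caching is enabled with a page directive; here the rendered output would be cached for 60 seconds (the duration is our illustrative choice):

```
<%@ OutputCache Duration="60" VaryByParam="None" %>
```

Setting VaryByParam to a query-string or form field name instead of "None" caches a separate copy of the page for each distinct value of that parameter.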
• Enhanced Reliability
ASP.NET ensures that the application is always available to the users.
• Memory Leak, Dead Lock and Crash Protection
ASP.NET automatically detects and recovers from errors like deadlocks and memory leaks to ensure our application is always available to our users. For example, say that our application has a small memory leak, and that after a week the leak has tied up a significant percentage of our server's virtual memory. ASP.NET will detect this condition, automatically start up another copy of the ASP.NET worker process, and direct all new requests to the new process. Once the old process has finished processing its pending requests, it is gracefully disposed and the leaked memory is released. Automatically, without administrator intervention or any interruption of service, ASP.NET has recovered from the error.
• Easy Deployment
ASP.NET takes the pain out of deploying server applications with "no touch" application deployment. ASP.NET dramatically simplifies installation of our application: we can deploy an entire application as easily as an HTML page, just by copying it to the server. There is no need to run regsvr32 to register any components, and configuration settings are stored in an XML file within the application.
• Dynamic update of running application
ASP.NET now lets us update compiled components without restarting the web server. In the past, with classic COM components, the developer had to restart the web server each time an update was deployed. With ASP.NET, we simply copy the new component over the existing DLL; ASP.NET automatically detects the change and starts using the new code.
2.4.2 C#.NET - MIDDLE END
In brief, C#.NET is the next generation of ASP (Active Server Pages) introduced by Microsoft. Like previous server-side scripting technologies, C#.NET allows us to build powerful, reliable, and scalable distributed applications. C#.NET is based on the Microsoft .NET Framework and uses the .NET features and tools to develop Web applications and Web services.
Even though C#.NET sounds like ASP and its syntax is compatible with ASP, C#.NET is much more than that. It provides many features and tools that let us develop more reliable and scalable Web applications and Web services in less time and with fewer resources. Since C#.NET is a compiled, .NET-based environment, we can use any .NET-supported language, including VB.NET, C#, JScript.NET, and VBScript.NET, to develop C#.NET applications.
2.4.3 SQL SERVER 2000 - BACK END
SQL Server 2000 exceeds dependability requirements and provides innovative capabilities that increase employee effectiveness, integrate heterogeneous IT ecosystems, and maximize capital and operating budgets. SQL Server 2000 provides the enterprise data management platform our organization needs to adapt quickly in a fast-changing environment.
With the lowest implementation and maintenance costs in the industry, SQL Server 2000 delivers rapid return on the data management investment. SQL Server 2000 supports the rapid development of enterprise-class business applications that can give our company a critical competitive advantage.
Benchmarked for scalability, speed, and performance, SQL Server 2000 is a fully enterprise-class database product, providing core support for Extensible Markup Language (XML) and Internet queries.
• User-defined functions
SQL Server has always provided the ability to store and execute SQL code routines via stored procedures. In addition, SQL Server has always supplied a number of built-in functions. Functions can be used almost anywhere an expression can be specified in a query. This was one of the shortcomings of stored procedures: they couldn't be used inline in queries, in select lists, WHERE clauses, and so on. Perhaps we want to write a routine to calculate the last business day of the month. With a stored procedure, we have to execute the procedure, passing in the current month as a parameter and returning the value into an output variable, and then use the variable in our queries. If only we could write our own function that we could use directly in a query, just like a system function. In SQL Server 2000, we can.
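As an illustrative sketch (the function and table names are ours), a scalar user-defined function can be created once and then used inline in a query, exactly where a stored procedure could not be:

```sql
-- A scalar UDF that returns the last day of the month for a given date.
CREATE FUNCTION dbo.LastDayOfMonth (@d datetime)
RETURNS datetime
AS
BEGIN
    -- Move to the first day of the month, add one month, step back one day.
    RETURN DATEADD(day, -1,
           DATEADD(month, 1, DATEADD(day, 1 - DAY(@d), @d)))
END
GO

-- Unlike a stored procedure, the function can appear directly in a select list:
SELECT OrderID, dbo.LastDayOfMonth(OrderDate) AS MonthEnd
FROM Orders
```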
• Indexed views
Views are often used to simplify complex queries, and they can contain joins and aggregate functions. However, in the past, queries against views were resolved to queries against the underlying base tables, and any aggregates were recalculated each time we ran a query against the view. In SQL Server 2000 Enterprise or Developer Edition, we can define indexes on views to improve query performance against the view. When creating an index on a view, the result set of the view is stored and indexed in the database. Existing applications can take advantage of the performance improvements without needing to be modified.
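A hedged sketch of the syntax (table and view names are illustrative): the view must be created WITH SCHEMABINDING, and the first index on it must be a unique clustered index, which is what materializes the result set:

```sql
-- An aggregating view; COUNT_BIG(*) is required in an indexable grouped view.
CREATE VIEW dbo.vOrderTotals
WITH SCHEMABINDING
AS
SELECT CustomerID,
       COUNT_BIG(*) AS OrderCount,
       SUM(Amount)  AS TotalAmount
FROM dbo.Orders
GROUP BY CustomerID
GO

-- Creating this index stores and indexes the view's result set.
CREATE UNIQUE CLUSTERED INDEX IX_vOrderTotals
ON dbo.vOrderTotals (CustomerID)
```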
• Distributed partitioned views
SQL Server 7.0 provided the ability to create partitioned views using the UNION ALL statement in a view definition. It was limited, however, in that all the tables had to reside within the same SQL Server where the view was defined. SQL Server 2000 expands the ability to create partitioned views by allowing us to horizontally partition tables across multiple SQL Servers. The feature helps to scale out one database server to multiple database servers, while making the data appear as if it comes from a single table on a single SQL Server. In addition, partitioned views can now be updated.
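The shape of such a view can be sketched as follows (server, database, and table names are illustrative); each member table holds one key range, enforced by CHECK constraints on the partitioning column:

```sql
-- A distributed partitioned view spanning three linked servers.
CREATE VIEW dbo.Customers AS
    SELECT * FROM Server1.SalesDB.dbo.Customers_Range1
    UNION ALL
    SELECT * FROM Server2.SalesDB.dbo.Customers_Range2
    UNION ALL
    SELECT * FROM Server3.SalesDB.dbo.Customers_Range3
```

Queries and, in SQL Server 2000, updates against dbo.Customers are routed to the appropriate member table based on the partitioning column's CHECK constraints.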
• New datatypes
SQL Server 2000 introduces three new datatypes. Two of these can be used as datatypes for local variables, stored procedure parameters and return values, user-defined function parameters and return values, or table columns:
bigint—An 8-byte integer that can store values from -2^63 (-9223372036854775808) through 2^63 - 1 (9223372036854775807).
sql_variant—A variable-sized column that can store values of various SQL Server-supported datatypes, with the exception of text, ntext, timestamp, and sql_variant.
The third new datatype, the table datatype, can be used only as a local variable datatype within functions, stored procedures, and SQL batches. The table datatype cannot be passed as a parameter to functions or stored procedures, nor can it be used as a column datatype. A variable defined with the table datatype can be used to store a result set for later processing. A table variable can be used in queries anywhere a table can be specified.
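A short sketch of a table variable in use (table and column names are ours):

```sql
-- Declare a table variable to hold an intermediate result set.
DECLARE @TopCustomers TABLE
(
    CustomerID  int PRIMARY KEY,
    TotalAmount money
)

INSERT INTO @TopCustomers (CustomerID, TotalAmount)
SELECT TOP 10 CustomerID, SUM(Amount)
FROM Orders
GROUP BY CustomerID
ORDER BY SUM(Amount) DESC

-- The variable can then be used in a query anywhere a table can:
SELECT o.OrderID, o.Amount
FROM Orders o
JOIN @TopCustomers t ON o.CustomerID = t.CustomerID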
• Text in row data
In previous versions of SQL Server, text and image data was always stored on a separate page chain from where the actual data row resided. The data row contained only a pointer to the text or image page chain, regardless of the size of the text or image data. SQL Server 2000 provides a new text in row table option that allows small text and image data values to be placed directly in the data row, instead of requiring a separate data page. This can reduce the amount of space required to store small text and image data values, as well as reduce the amount of I/O required to retrieve rows containing small text and image data values.
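The option is enabled per table with the sp_tableoption system procedure; for example (the table name and the 256-byte limit are illustrative):

```sql
-- Store text/image values of up to 256 bytes directly in the data row.
EXEC sp_tableoption 'Messages', 'text in row', '256'
```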
• Cascading RI constraints
In previous versions of SQL Server, referential integrity (RI) constraints were restrictive only: if an insert, update, or delete operation violated referential integrity, it was aborted with an error message. SQL Server 2000 provides the ability to specify the action to take when a column referenced by a foreign key constraint is updated or deleted. We can still abort the update or delete if related foreign key records exist by specifying the NO ACTION option, or we can specify the new CASCADE option, which cascades the update or delete operation to the related foreign key records.
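As a sketch (table names are illustrative), the cascade action is declared on the foreign key itself:

```sql
-- Deleting a customer, or changing its key, propagates to its orders.
CREATE TABLE Orders
(
    OrderID    int PRIMARY KEY,
    CustomerID int NOT NULL
        REFERENCES Customers (CustomerID)
        ON DELETE CASCADE
        ON UPDATE CASCADE
)
```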
• Multiple SQL server instances
Previous versions of SQL Server supported the running of only a single instance of SQL Server at a time on a computer. Running multiple instances or multiple versions of SQL Server required switching back and forth between the different instances, requiring changes in the Windows registry.
SQL Server 2000 provides support for running multiple instances of SQL Server on the same system. This allows us to simultaneously run one instance of SQL Server 6.5 or 7.0 along with one or more instances of SQL Server 2000. Each SQL Server instance runs independently of the others and has its own set of system and user databases, security configuration, and so on. Applications can connect to the different instances in the same way they connect to different SQL Servers on different machines.
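For example, a client reaches a named instance by appending the instance name to the server name in its connection string (the server, instance, and database names below are illustrative):

```
Server=MYSERVER\Instance1;Database=College;Integrated Security=SSPI;
```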
• XML support
Extensible Markup Language (XML) has become a standard in Web-related programming to describe the contents of a set of data and how the data should be output or displayed on a Web page. XML, like HTML, is derived from the Standard Generalized Markup Language (SGML). When linking a Web application to SQL Server, a translation needs to take place from the result set returned from SQL Server to a format that can be understood and displayed by the Web application. Previously, this translation needed to be done in a client application.
• Log shipping
The Enterprise Edition of SQL Server 2000 now supports log shipping, which we can use to copy and load transaction log backups from one database to one or more databases on a constant basis. This allows you to have a primary read/write database with one or more readonly copies of the database that are kept synchronized by restoring the logs from the primary database. The destination database can be used as a warm standby for the primary database, for which we can switch users over in the event of a primary database failure. Additionally, log shipping provides a way to offload read-only query processing from the primary database to the destination database.
2.4.4 ADO.NET - DATABASE CONNECTIVITY
Most applications need data access at some point, making it a crucial component when working with applications. Data access means making the application interact with a database, where all the data is stored. Different applications have different requirements for database access. ASP.NET uses ADO.NET (ActiveX Data Objects .NET) as its data access and manipulation protocol, which also enables us to work with data on the Internet.
• ADO.NET Data Architecture
Data access in ADO.NET relies on two components: the DataSet and the Data Provider.
1. The DataSet
The DataSet is a disconnected, in-memory representation of data. It can be considered a local copy of the relevant portions of the database. The DataSet is persisted in memory, and the data in it can be manipulated and updated independent of the database. When work with the DataSet is finished, the changes can be written back to the central database. The data in a DataSet can be loaded from any valid data source, such as a Microsoft SQL Server database, an Oracle database, or a Microsoft Access database.
2. Data Provider
The Data Provider is responsible for providing and maintaining the connection to the database. A DataProvider is a set of related components that work together to provide data in an efficient and performance driven manner. The .NET Framework currently comes with two DataProviders: the SQL Data Provider which is designed only to work with Microsoft's SQL Server 7.0 or later and the OleDb DataProvider which allows us to connect to other types of databases like Access and Oracle. Each DataProvider consists of the following component classes:
The Connection object which provides a connection to the database. The Command object which is used to execute a command. The DataReader object which provides a forward-only, read only, connected recordset. The DataAdapter object which populates a disconnected DataSet with data and performs update.
• Data access with ADO.NET can be summarized as follows:
A connection object establishes the connection for the application with the database. The command object provides direct execution of the command to the database. If the command returns more than a single value, the command object returns a DataReader to provide the data. Alternatively, the DataAdapter can be used to fill the Dataset object. The database can be updated using the command object or the DataAdapter.
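The connected path described above can be sketched in C# as follows (the connection string, table, and column names are illustrative):

```csharp
// Connected data access with the SQL Data Provider.
using System;
using System.Data.SqlClient;

class ReaderDemo
{
    static void Main()
    {
        string connStr =
            "Server=(local);Database=College;Integrated Security=SSPI;";

        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();                                  // 1. establish the connection
            SqlCommand cmd = new SqlCommand(
                "SELECT UserName FROM Registration", conn); // 2. the command to execute

            // 3. a forward-only, read-only, connected recordset
            SqlDataReader reader = cmd.ExecuteReader();
            while (reader.Read())
                Console.WriteLine(reader["UserName"]);
            reader.Close();
        } // the connection is closed when the using block exits
    }
}
```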
• Component classes that make up the Data Providers
1. The Connection Object
The Connection object creates the connection to the database. Microsoft Visual Studio .NET provides two types of Connection classes: the SqlConnection object, which is designed specifically to connect to Microsoft SQL Server 7.0 or later, and the OleDbConnection object, which can provide connections to a wide range of database types like Microsoft Access and Oracle. The Connection object contains all of the information required to open a connection to the database.
2. The Command Object
The Command object is represented by two corresponding classes: SqlCommand and OleDbCommand. Command objects are used to execute commands to a database across a data connection. The Command objects can be used to execute stored procedures on the database, SQL commands, or return complete tables directly.
3. The DataReader Object
The DataReader object provides a forward-only, read-only, connected stream recordset from a database. Unlike other components of the Data Provider, DataReader objects cannot be directly instantiated. Rather, the DataReader is returned as the result of the Command object's ExecuteReader method. The SqlCommand.ExecuteReader method returns a SqlDataReader object, and the OleDbCommand.ExecuteReader method returns an OleDbDataReader object. The DataReader can provide rows of data directly to application logic when we do not need to keep the data cached in memory.
4. The DataAdapter Object
The DataAdapter is the class at the core of ADO.NET's disconnected data access. It is essentially the middleman facilitating all communication between the database and a DataSet. The DataAdapter is used to fill a DataTable or DataSet with data from the database using its Fill method. After the memory-resident data has been manipulated, the DataAdapter can commit the changes to the database by calling the Update method. The DataAdapter provides four properties that represent database commands:
SelectCommand, InsertCommand, DeleteCommand and UpdateCommand
When the Update method is called, changes in the DataSet are copied back to the database and the appropriate InsertCommand, DeleteCommand, or UpdateCommand is executed.
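The disconnected Fill/Update round trip can be sketched in C# (connection string, table, and column names are illustrative; here a SqlCommandBuilder derives the insert/update/delete commands from the select, which assumes the table has a primary key):

```csharp
// Disconnected data access with a DataAdapter and DataSet.
using System.Data;
using System.Data.SqlClient;

class AdapterDemo
{
    static void Main()
    {
        string connStr =
            "Server=(local);Database=College;Integrated Security=SSPI;";

        SqlDataAdapter adapter = new SqlDataAdapter(
            "SELECT RegId, UserName FROM Registration",
            new SqlConnection(connStr));

        // Auto-generates InsertCommand, UpdateCommand, and DeleteCommand.
        SqlCommandBuilder builder = new SqlCommandBuilder(adapter);

        DataSet ds = new DataSet();
        adapter.Fill(ds, "Registration");   // opens and closes the connection itself

        // Manipulate the in-memory copy while disconnected...
        ds.Tables["Registration"].Rows[0]["UserName"] = "newname";

        // ...then copy the changes back to the database.
        adapter.Update(ds, "Registration");
    }
}
```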
• Phase 1: Classic
In the classic model, note how all layers are held within the application itself. This architecture would be very awkward to maintain in a large-scale environment unless extreme care was taken to fully encapsulate or modularize the code. Because Phase 1 of the Duwamish Books sample focuses on a small retail operation, this type of design is perfectly acceptable. It's easy to develop and, in the limited environment of a single retail outlet, easy to maintain.
In Phase 1, we deliver the basic functionality and documentation of the code and design
• Phase 2: Two-tier
Phase 2 moves to a two-tier design, as we break out the data access code into its own layer. By breaking out this layer, we make multiple-user access to the data much easier to work with. The developer does not have to worry about record locking, or shared data, because all data access is encapsulated and controlled within the new tier.
• Phase 3 and Phase 3.5: Logical three-tier and physical three-tier
The business rules layer contains not only rules that determine what to do with data, but also how and when to do it. For an application to become scalable, it is often necessary to split the business rules layer into two separate layers: the client-side business logic, which we call workflow, and the server-side business logic. Although we describe these layers as client and server-side, the actual physical implementations can vary. Generally, workflow rules govern user input and other processes on the client, while business logic controls the manipulation and flow of data on the server.
Phase 3 of the Duwamish Books sample breaks out the business logic into a COM component to create a logical three-tier application. Our second step in creating a three-tier application is to provide a physical implementation of the architecture. To distribute the application across a number of computers, we implement Microsoft Transaction Server in Phase 3.5. The application becomes easier to maintain and distribute, as a change to the business rules affects a smaller component, not the entire application. This involves some fairly lengthy analysis because the business rules in Phase 1 were deliberately not encapsulated.
• Phase 4: A Windows-based application
Phase 4 of the Duwamish Books sample is the culmination of the migration from a desktop model to a distributed n-tier model implemented as a Web application. In Phase 4, we offer three client types aimed at different browser types. We also break out the workflow logic from the client application. This logic is now implemented through a combination of ASP script, some client-side processing (depending on the client type), and a COM component. The Workflow component converts ADO Recordsets it receives from the Business Logic Layer component into XML data, which is, in turn, converted into HTML for presentation.
Phase 4 documents the benefits, architecture, and implementation issues relating to the migration of a three-tier application to a Web-based application
Performance has not been tuned for the minimum system configuration. Increasing RAM above the recommended system configuration will improve performance, specifically when running multiple applications, working with large projects, or doing enterprise-level development.
2.6 SOFTWARE REQUIREMENTS
OPERATING SYSTEM                  : WINDOWS XP
BROWSER                           : INTERNET EXPLORER 6.0 OR ANY HTTP BROWSER
FRONT END / SERVER SIDE SCRIPTING : ASP.NET
BACK END                          : SQL SERVER 2000
WEB SERVER                        : IIS
CLIENT SIDE SCRIPTING             : JAVASCRIPT
PROTOCOLS                         : TCP/IP; HTTP, SMTP, POP3, WAP
2.7 HARDWARE REQUIREMENTS
PROCESSOR     : PENTIUM IV
CLOCK SPEED   : 2 GHZ
SYSTEM BUS    : 32 BIT
RAM           : 128 MB
MONITOR       : SVGA COLOR
KEYBOARD      : 108 KEYS
MODEM         : 56 KBPS
FLOPPY DRIVE  : 1.44 MB
System design is the solution to the creation of a new system. This phase is composed of several steps. It focuses on the detailed implementation of the feasible system and emphasizes translating design specifications into performance specifications. System design has two phases of development: logical and physical design.
During the logical design phase the analyst describes inputs (sources), outputs (destinations), databases (data stores), and procedures (data flows), all in a format that meets the user's requirements. The analyst also specifies the user needs at a level that virtually determines the information flow into and out of the system and the data resources. Here the logical design is done through data flow diagrams and database design.
The logical design is followed by physical design, or coding. Physical design produces the working system by defining the design specifications, which tell the programmers exactly what the candidate system must do. The programmers write the necessary programs that accept input from the user, perform the necessary processing on the accepted data, and produce the required report on hard copy or display it on the screen.
3.1 TABLE DESIGN
Table 3.1.1 CONTACT MESSAGE
The overall objective in the development of database technology has been to treat data as an organizational resource and as an integrated whole. A DBMS allows data to be protected and organized separately from other resources. A database is an integrated collection of data. The most significant form of data as seen by the programmers is data as stored on direct access storage devices. This is the difference between logical and physical data.
Database files are the key source of information in the system, so designing them is a critical task. The files should be properly designed and planned for the collection, accumulation, editing, and retrieval of the required information.
The organization of data in database aims to achieve three major objectives: -
• Data integration.
• Data integrity.
• Data independence.
The proposed system stores the information relevant for processing in the MS SQL Server database. This database contains tables, where each table corresponds to one particular type of information. Each piece of information in a table is called a field or column. A table also contains records, each of which is a set of fields. All records in a table have the same set of fields with different information. Primary key fields uniquely identify a record in a table. There are also fields that contain the primary key of another table, called foreign keys.
Normalization is a technique of separating redundant fields and breaking up a large table into smaller ones. It is also used to avoid insertion, deletion, and update anomalies. All the tables have been normalized up to the third normal form. In short, the rules for each of the three normal forms are as below.
• First normal form
A relation is said to be in 1NF if all the underlying domains of its attributes contain only simple, atomic values.
• Second normal form
The 2NF is based on the concept of full functional dependency. A relation is said to be in 2NF if and only if it is in 1NF and every non-key attribute is fully functionally dependent on a candidate key of the table.
• Third normal form
The 3NF is based on the concept of transitive dependency. A relation in 2NF is said to be in 3NF if every non-key attribute is non-transitively dependent on the primary key.
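As an illustration (the table and column names are ours), a table Student(StudentID, StudentName, DeptID, DeptName) violates 3NF because DeptName depends on StudentID only transitively, through DeptID. Splitting it restores 3NF:

```sql
-- After normalization: department facts live in their own table.
CREATE TABLE Department
(
    DeptID   int PRIMARY KEY,
    DeptName varchar(50)
)

CREATE TABLE Student
(
    StudentID   int PRIMARY KEY,
    StudentName varchar(50),
    DeptID      int REFERENCES Department (DeptID)
)
```

Now a department's name is stored once, and renaming a department is a single-row update instead of one per student.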
3.4 WEB FORM DESIGN
Web Forms are based on ASP.NET. Working with Web Forms is similar to working with Windows Forms. But the difference is that we will create Web pages with Web forms that will be accessible by a Web browser. Web Forms are Web pages that serve as the user interface for a Web application. A Web Forms page presents information to the user in any browser or client device and implements application logic using server-side code. Web Forms are based on the System.Web.UI.Page class. The class hierarchy for the page class is shown below.
3.4.1 COMPONENTS OF WEB FORMS
In Web Forms pages, the user interface programming is divided into two parts: the visual component (design page) and the logic (code behind page).
The visual element is the Web Forms page. The page consists of a file with static HTML, or ASP.NET server controls, or both simultaneously. The Web Forms page works as a container for the static text and the controls we want to display. Using the Visual Studio Web Forms Designer and ASP.NET server controls, we can design the form just like in any Visual Studio application.
The logic for the Web Forms page consists of code that we create to interact with the form. The programming logic is in a separate file from the user interface file. This file is the "code-behind" file and has an ".aspx.vb" (VB) or ".aspx.cs" (C-Sharp) extension. The logic we write in the code-behind file can be written in Visual Basic or Visual C#.
The code-behind class files for all Web Forms pages in a project are compiled into the project dynamic-link library (.dll) file. The .aspx page file is also compiled, but differently. The first time a user loads the .aspx page, ASP.NET automatically generates a .NET class file that represents the page and compiles it into a second .dll file. The generated class for the .aspx page inherits from the code-behind class that was compiled into the project .dll file. When the user requests the Web page URL, the .dll files run on the server and dynamically produce the HTML output for the page.
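A minimal sketch of the two parts (the file names, class name, and control ID are ours). The .aspx file holds the visual component:

```
<%-- Login.aspx: the visual component --%>
<%@ Page Language="C#" Inherits="Login" CodeBehind="Login.aspx.cs" %>
<html>
<body>
  <form runat="server">
    <asp:Label id="lblGreeting" runat="server" />
  </form>
</body>
</html>
```

and the code-behind file holds the logic, in a class the page inherits from:

```csharp
// Login.aspx.cs: the code-behind file
using System;
using System.Web.UI;
using System.Web.UI.WebControls;

public class Login : Page
{
    // Declared protected so the generated page class can bind to it.
    protected Label lblGreeting;

    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);
        lblGreeting.Text = "Welcome to SNGCE";
    }
}
```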
3.5 HOME PAGE
The home page of a website is the first page a user sees upon entering the website URL in the browser's address bar. The entire website depends on how the home page is designed, since it forms the platform for viewing the other Web forms. In short, the home page serves as an abstract of the entire website.
The SNGCE website begins with an interactive home page in which a recruiter's username and password can be entered. A validation is performed against the database to verify whether the recruiter is an already authorized user; if not, the recruiter is allowed to sign up by filling in the necessary details on a form. The home page appears as given below.
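A hedged sketch of the recruiter validation just described, assuming a SQL Server database with a hypothetical Recruiters table holding UserName and Password columns (the project's real schema may differ):

```csharp
using System.Data.SqlClient;

// Returns true if the recruiter is already registered in the database.
static bool IsAuthorizedRecruiter(string connectionString,
                                  string userName, string password)
{
    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "SELECT COUNT(*) FROM Recruiters WHERE UserName = @u AND Password = @p",
        conn))
    {
        // Parameters guard the login form against SQL injection.
        cmd.Parameters.AddWithValue("@u", userName);
        cmd.Parameters.AddWithValue("@p", password);
        conn.Open();
        return (int)cmd.ExecuteScalar() > 0;
    }
}
```

If the check fails, the page would redirect the visitor to the registration form instead.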
3.6 LINKS AND WEBPAGES CODING
4.1 FEATURES OF LANGUAGE
• Microsoft Visual Studio .NET
Visual Studio .NET is a complete set of development tools for building ASP Web applications, XML Web services, desktop applications, and mobile applications. Visual Basic .NET, Visual C++ .NET, and Visual C# .NET all use the same integrated development environment (IDE), which allows them to share tools and facilitates the creation of mixed-language solutions. In addition, these languages leverage the functionality of the .NET Framework, which provides access to key technologies that simplify the development of ASP Web applications and XML Web services.
• The .NET Framework
The .NET Framework is a multi-language environment for building, deploying, and running XML Web services and applications. It consists of two main parts:
1. Common Language Runtime
Despite its name, the runtime actually has a role in both a component's run-time and development-time experiences. While the component is running, the runtime is responsible for managing memory allocation, starting up and stopping threads and processes, and enforcing security policy, as well as satisfying any dependencies that the component might have on other components. At development time, the runtime's role changes slightly: because it automates so much (for example, memory management), the runtime makes the developer's experience very simple, especially when compared to COM as it is today. In particular, features such as reflection dramatically reduce the amount of code a developer must write in order to turn business logic into a reusable component.
2. Unified programming classes
The framework provides developers with a unified, object-oriented, hierarchical, and extensible set of class libraries (APIs). Currently, C++ developers use the Microsoft Foundation Classes and Java developers use the Windows Foundation Classes. The framework unifies these disparate models and gives C#.NET and JScript programmers access to class libraries as well. By creating a common set of APIs across all programming languages, the common language runtime enables cross-language inheritance, error handling, and debugging. All programming languages, from JScript to C++, have similar access to the framework, and developers are free to choose the language that they want to use.
• Introduction to C#.NET
In brief, C#.NET is the next generation of ASP (Active Server Pages) introduced by Microsoft. Like previous server-side scripting technologies, C#.NET allows us to build powerful, reliable, and scalable distributed applications. C#.NET is based on the Microsoft .NET Framework and uses the .NET features and tools to develop Web applications and Web services.
Even though C#.NET sounds like ASP and its syntax is compatible with ASP, C#.NET is much more than that. It provides many features and tools that let us develop more reliable and scalable Web applications and Web services in less time and with fewer resources. Since C#.NET is a compiled, .NET-based environment, we can use any .NET-supported language, including VB.NET, C#, JScript.NET, and VBScript.NET, to develop C#.NET applications.
• Advantages of C#.NET
1. .NET Compatible
The .NET compatibility of C#.NET enables applications to use the features provided by .NET. Some of these features are multi-language support, compiled code, automatic memory management, and the .NET base class library.
We have a choice of programming language: we can write Web applications using any .NET-supported language, including C#, VB.NET, JScript.NET and VBScript.NET.
All C#.NET code is compiled, rather than interpreted, which allows early binding, strong typing, just-in-time (JIT) compilation to native code, automatic memory management, and caching.
The .NET base class library (BCL) provides hundreds of useful classes. This library can be accessed from any .NET-supported language.
2. Web Forms and Rapid Development
Web Forms allow us to build Web GUI applications rapidly. Web Forms provide Web pages and server-side controls, which we can use in VS.NET much as we write Windows applications. VS.NET provides drag-and-drop features similar to those of Windows application development, which allow us to drag server-side controls onto a page, set control properties, and write event handlers using the wizard property pages. The VS.NET framework writes code for us under the hood, and our application is ready in no time. In most cases, we don't even need to know what the wizards write for us under the hood.
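For instance, after dragging a Button, a TextBox and a Label onto a form in the designer, the wizard-generated event handler skeleton is filled in with ordinary server-side code (the control names here are illustrative):

```csharp
// Runs on the server when the user clicks SubmitButton in the browser;
// SubmitButton, NameTextBox and ResultLabel are server controls
// placed on the page via drag and drop.
protected void SubmitButton_Click(object sender, EventArgs e)
{
    ResultLabel.Text = "Hello, " + NameTextBox.Text;
}
```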
3. Native XML Support and XML Web Services
XML is a vital part of the entire .NET Framework. .NET uses XML to store and transfer data among applications. The .NET base class library provides high-level programming model classes that can be used to work with XML.
An XML Web service provides the means to access server functionality remotely. Web services use SOAP (Simple Object Access Protocol) to provide access to clients. Web services can be used to build the different layers of distributed applications, and these layers can be used remotely.
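As a minimal sketch (the service class, method, and return value are illustrative, not part of the project), a public method marked [WebMethod] in an .asmx service becomes remotely callable over SOAP:

```csharp
using System.Web.Services;

// VacancyService.asmx code-behind: exposes server functionality over SOAP.
public class VacancyService : WebService
{
    [WebMethod]
    public int GetVacancyCount(string companyName)
    {
        // A real implementation would query the placement database;
        // a fixed value stands in for it here.
        return 5;
    }
}
```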
4. Databases and ADO.NET
ADO.NET is a new version of ADO (ActiveX Data Objects). Even though ADO.NET sounds like ADO, it is a completely redesigned database access technology. ADO.NET allows us to access different kinds of databases using only one programming model. Developers may be familiar with DAO, ADO, ODBC, RDO and other database access technologies that preceded ADO.NET. Each of these technologies had its own pros and cons. ADO.NET combines features of all of these techniques, provides a single higher-level programming model, and hides the details from us. It makes our job much simpler and enables rapid development. See the ADO.NET section of C# Corner for ADO.NET source code samples and tutorials.
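As an example of this single programming model, the registration table mentioned earlier could be read into an in-memory DataSet like this (the connection string and query are assumptions for illustration):

```csharp
using System.Data;
using System.Data.SqlClient;

static DataSet LoadRegistrations(string connectionString)
{
    // SqlDataAdapter opens and closes the connection on its own.
    var adapter = new SqlDataAdapter("SELECT * FROM registration",
                                     new SqlConnection(connectionString));
    var ds = new DataSet();
    adapter.Fill(ds, "registration");   // copies the rows into an in-memory table
    return ds;
}
```

The same DataSet-based code would work against other databases by swapping the provider classes, which is the point of the unified model.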
5. Graphics and GDI+
GDI+ is an improved version of GDI (Graphics Device Interface) for writing Windows and Web graphics applications. The .NET base class library provides GDI+ classes for writing graphics applications. Using these classes, we can write not only Windows applications but also Web graphics applications. See the GDI+ section of C# Corner for sample applications and tutorials on GDI+.
6. Caching and State Management
One of the most important factors in building high-performance, scalable Web applications is the ability to store items, whether data objects, pages, or parts of a page, in memory the initial time they are requested. We can store these objects on the server or on the client machine. Storing data on a server or a client is called caching.
C#.NET provides two types of caching: page caching and request caching. We use request caching to improve code efficiency and to share common data across pages, and we use page caching to provide clients with fast access to Web applications.
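The two kinds of caching can be sketched as follows (the cache key, helper method, and label control are hypothetical):

```csharp
// Page caching: a directive at the top of the .aspx file caches the
// rendered output, e.g.  <%@ OutputCache Duration="60" VaryByParam="None" %>

// Request/data caching via the Cache object, which is shared across pages:
protected void Page_Load(object sender, EventArgs e)
{
    if (Cache["FlashNews"] == null)
        Cache["FlashNews"] = LoadFlashNewsFromDatabase();   // stored once
    NewsLabel.Text = (string)Cache["FlashNews"];            // reused on later requests
}
```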
7. Enhanced Security
C#.NET provides schemes to authenticate and authorize the users of our applications. We can easily remove, add to, or replace these schemes, depending on the needs of our application.
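For example, forms authentication, one common scheme, is enabled and swapped through a web.config fragment like the following (the login page name is illustrative):

```xml
<!-- web.config fragment: the scheme is replaced by changing mode="Forms" -->
<system.web>
  <authentication mode="Forms">
    <forms loginUrl="Login.aspx" />
  </authentication>
  <authorization>
    <deny users="?" /> <!-- "?" denies anonymous users -->
  </authorization>
</system.web>
```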
8. Mobile Device Development
A new addition to C#.NET, the Mobile SDK allows us to write Web applications that run on Wireless Application Protocol (WAP), Wireless Markup Language (WML) and HDML-compliant devices. The Mobile SDK is available for download, and there are many source code samples and tutorials on how to develop mobile applications using Mobile .NET.
9. Messaging and Directory Services
C#.NET uses the Messaging services class library, which is a high-level programming wrapper for MSMQ messaging services.
The .NET base class library also contains class wrappers for Active Directory that enable you to access Active Directory Services Interface (ADSI), Lightweight Directory Access Protocol (LDAP), and other directory services through C#.NET applications.
10. Migration from ASP to C#.NET
Even though C#.NET syntax is similar to ASP, C#.NET is a newly designed, more object-oriented model. ASP pages won't work in C#.NET without modification. The only advantage ASP developers have is the familiar code syntax.
5.1 SYSTEM TESTING
Testing is a set of activities that can be planned and conducted systematically. Testing begins at the module level and works towards the integration of the entire computer-based system. Nothing is complete without testing, as it is vital to the success of the system.
• Testing Objectives:
There are several rules that can serve as testing objectives:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has high probability of finding an undiscovered error.
3. A successful test is one that uncovers an undiscovered error.
If testing is conducted successfully according to the objectives stated above, it will uncover errors in the software. Testing also demonstrates that the software functions appear to be working according to specification and that performance requirements appear to have been met.
There are three ways to test a program:
1. For Correctness
2. For Implementation efficiency
3. For Computational Complexity.
Tests for correctness are supposed to verify that a program does exactly what it was designed to do. This is much more difficult than it may at first appear, especially for large programs.
Tests for implementation efficiency attempt to find ways to make a correct program faster or use less storage. It is a code-refining process, which reexamines the implementation phase of algorithm development.
Tests for computational complexity amount to an experimental analysis of the complexity of an algorithm or an experimental comparison of two or more algorithms, which solve the same problem.
• Testing Correctness
The following ideas should be a part of any testing plan:
1. Preventive Measures
2. Spot checks
3. Testing all parts of the program
4. Test Data
5. Looking for trouble
6. Time for testing
7. Re Testing
The data was entered in all forms separately, and whenever an error occurred, it was corrected immediately. A quality team deputed by the management verified all the necessary documents and tested the software while entering the data at all levels. The entire testing process can be divided into three phases:
1. Unit Testing
2. Integrated Testing
3. Final/ System testing
5.1.1 UNIT TESTING
As this system is partially a GUI-based Windows application, the following were tested in this phase:
1. Tab Order
2. Reverse Tab Order
3. Field length
4. Front end validations
In our system, unit testing has been handled successfully. Test data was given to each and every module in all respects, and the desired output was obtained. Each module was tested and found to be working properly.
5.1.2 INTEGRATION TESTING
Test data should be prepared carefully, since the test data alone determines the efficiency and accuracy of the system. Artificial data were prepared solely for testing. Every program validates the input data.
5.1.3 VALIDATION TESTING
In this, all the Code Modules were tested individually one after the other. The following were tested in all the modules
1. Loop testing
2. Boundary Value analysis
3. Equivalence Partitioning Testing
In our case all the modules were combined and given the test data. The combined modules work successfully without any side effects on other programs. Everything was found to be working fine.
5.1.4 OUTPUT TESTING
This is the final step in testing. Here the entire system was tested as a whole with all forms, code, modules and class modules. This form of testing is popularly known as Black Box testing or system testing.
Black Box testing methods focus on the functional requirements of the software. That is, Black Box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black Box testing attempts to find errors in the following categories: incorrect or missing functions, interface errors, errors in data structures or external database access, performance errors, and initialization and termination errors.
The project report entitled "COLLEGE WEBSITE CREATION" has come to its final stage. The system has been developed with much care so that it is free of errors and at the same time efficient and less time-consuming. The important thing is that the system is robust. We have tried our level best to make the site as dynamic as possible. Provision has also been made for future developments in the system. The entire system is secured. This online system will be approved and implemented soon.
OVERVIEW OF VISUAL STUDIO 2005
Introduction to Visual Studio .NET
In February 2002, software developers and architects worldwide were introduced to Visual Studio .NET and the Microsoft .NET Framework. This landmark release, four years in the making, offered a unified development environment and programming model for constructing a range of software solutions. With the recent launch of Visual Studio .NET 2003, customers gained the benefits of enhanced tool and framework functionality, as well as increased performance, security and scalability for building enterprise-critical software.
Features of Visual Studio 2005
• Refactoring Support
Refactoring support makes it easy to change your code, for example pulling a large stretch of inline code into its own method or converting a field into a property. A key tenet of Extreme Programming, created by Kent Beck, is constant refactoring: under this programming model we develop code rapidly and iteratively, but to keep our code from becoming a jumbled mess, we must constantly refactor. Refactoring is a C#-only feature.
• Edit and Continue
Visual Basic has always been about Rapid Application Development (RAD). One key feature is the ability to fix runtime errors on the fly. With Visual Basic .NET 1.0 and Visual Basic .NET 1.1, this powerful feature wasn't included. The feature is on board for Whidbey: if we run into an exception at runtime, we get an exception helper that provides tips for fixing common errors, but more importantly, we can edit the code, press F5, and execution continues right where we left off. Edit and Continue is a VB.NET-only feature.
• ClickOnce
ClickOnce makes it easy to install applications and provide ongoing updates (self-updating): rather than being forced to distribute new versions of the application, we can deploy just the portion of the application that has changed. In the .NET Framework 1.0 and 1.1, href-exes were not able to solve many deployment issues. Href-exes are also known as "no-touch deployment" or "zero-impact deployment".
Essentially, with versions 1.0/1.1, we can deploy an application to a Web server and allow users to browse to the URL for the .exe, as in: <a href="someapp.exe">You can run me by clicking this link</a>. When the user clicks the link, the application downloads to their Internet files cache and runs. To keep this from being a huge security hole, the application's permissions are restricted based on the URL (Intranet applications get different permissions than Internet applications, for example) or other factors. This means that some applications no longer need to be deployed in the traditional sense: no more setup.exe or MSI.
href-exes have a number of limitations:
■ The .NET Framework must be pre-installed on the client machine.
■ There's no good way to bootstrap the .NET Framework down if it's not there.
■ Most non-trivial applications consist of the main .exe and a number of assembly files. With href-exes, the assembly files are downloaded on demand, which is great for corporate Intranet applications, but there's no way to download the application in one shot so that we know it can be safely used off-line.
■ Limited support for versioning.
■ The application doesn't hook into Add/Remove Programs, and the application doesn't install Start menu shortcuts.
The developed system is flexible and changes can be made easily. The system is developed with an insight into the necessary modification that may be required in the future. Hence the system can be maintained successfully without much rework.
One of the main future enhancements of our system is to include student records, which would facilitate quick and easy retrieval of student details. Scope has also been made to add a link to the library.