Windows Server 2008 includes a variation of installation called Server Core. Server Core is a significantly scaled-back installation where no Windows Explorer shell is installed.

It also lacks Internet Explorer and many other non-essential features. All configuration and maintenance is done entirely through command-line interface windows, or by connecting to the machine remotely using Microsoft Management Console (MMC). Notepad and some Control Panel applets, such as Regional Settings, are available. Server Core can also be used to create a cluster with high availability using failover clustering or network load balancing.

Windows Server 2008 offers high availability to services and applications through Failover Clustering. Most server features and roles can be kept running with little to no downtime.

In Windows Server 2008, the way clusters are qualified changed significantly with the introduction of the cluster validation wizard. With the cluster validation wizard, an administrator can run a set of focused tests on a collection of servers that are intended to be used as nodes in a cluster.

This cluster validation process tests the underlying hardware and software directly, and individually, to obtain an accurate assessment of how well failover clustering can be supported on a given configuration.

Hyper-V is hypervisor-based virtualization software, forming a core part of Microsoft's virtualization strategy.

It virtualizes servers on an operating system's kernel layer. It can be thought of as partitioning a single physical server into multiple small computational partitions. Hyper-V includes the ability to act as a Xen virtualization hypervisor host, allowing Xen-enabled guest operating systems to run virtualized. Also, a standalone variant of Hyper-V exists; this variant supports only the x86-64 architecture.

Windows System Resource Manager provides resource management and can be used to control the amount of resources a process or a user can use based on business priorities. Process Matching Criteria, which is defined by the name, type, or owner of the process, enforces restrictions on the resource usage by a process that matches the criteria. The CPU time, the bandwidth that it can use, the number of processors it can be run on, and the memory allocated to a process can be restricted. Restrictions can be set to be imposed only on certain dates as well.

Server Manager is a new roles-based management tool for Windows Server 2008. It is an improvement over the Configure My Server dialog that launches by default on Windows Server 2003 machines.

However, rather than serving only as a starting point for configuring new roles, Server Manager gathers together all of the operations users would want to conduct on the server, such as getting a remote deployment method set up, adding more server roles, and so on. Support for the RTM version of Windows Server 2008 ended on July 12, 2011, [3] [4] and users of that release no longer receive security updates for the operating system.

Windows Server 2008, which is built from the same codebase as Windows Vista, continued to be supported with security updates until January 14, 2020, the same end-of-life date as Windows 7.

Microsoft originally planned to end support for Windows Server 2008 earlier than it did. However, in order to give customers more time to migrate to newer Windows versions, particularly in developing or emerging markets, Microsoft decided to extend support until January 14, 2020. Windows Server 2008 can be upgraded to Windows Server 2008 R2 on 64-bit systems only.

Most editions of Windows Server 2008 are available in x86-64 and IA-32 variants. A separate edition for Itanium-based systems supports IA-64 processors and is aimed at high-workload scenarios such as database servers and line-of-business applications; as such, it is not optimized for use as a file server or media server. Windows Server 2008 is the last 32-bit Windows server operating system. The Microsoft Imagine program, known as DreamSpark at the time, used to provide verified students with the 32-bit variant of Windows Server 2008 Standard Edition, but that version has since been removed.

However, they still provide the R2 release. Windows Server 2008 Foundation was released on May 21, 2009.

Windows Server 2008 shares most of its updates with Windows Vista due to being based on that operating system's codebase.

A workaround was found that allowed the installation of updates for Windows Server 2008 on Windows Vista, [40] adding three years of security updates to that operating system. Support for Windows Vista ended on April 11, 2017, [41] while support for Windows Server 2008 ended on January 14, 2020. Due to the operating system being based on the same codebase as Windows Vista and being released on the same day as the initial release of Windows Vista Service Pack 1, the RTM release of Windows Server 2008 already includes the updates and fixes of Service Pack 1.

Service Pack 2 was initially announced on October 24, 2008 [42] and released on May 26, 2009. Service Pack 2 added new features, such as Windows Search 4.0. Windows Server 2008 specifically received the final release of Hyper-V 1.0. Windows Vista and Windows Server 2008 share the same service pack update binary because the codebases of the two operating systems are unified; Windows Vista and Windows Server 2008 are the first Microsoft client and server operating systems to share the same codebase since the release of Windows 2000. Windows Server 2008 shipped with Internet Explorer 7, the same version that shipped with Windows Vista.

Internet Explorer 9 was continually updated with cumulative monthly update rollups until support for Internet Explorer 9 on Windows Server 2008 ended on January 14, 2020. The latest officially supported version of the .NET Framework is version 4.6. Starting in March 2019, Microsoft began transitioning to exclusively signing Windows updates with the SHA-2 algorithm. As a result, Microsoft released several updates throughout 2019 to add SHA-2 signing support to Windows Server 2008. In June 2018, Microsoft announced that it would be moving Windows Server 2008 to a monthly update model beginning with updates released in September 2018, [51] two years after Microsoft switched the rest of its supported operating systems to that model.

With the new update model, instead of updates being released as they became available, only two update packages were released on the second Tuesday of every month until Windows Server 2008 reached its end of life: one package containing security and quality updates, and a smaller package that contained only the security updates.

Users could choose which package they wanted to install each month. Later in the month, another package would be released which was a preview of the next month's security and quality update rollup.

Installing the preview rollup package released for Windows Server 2008 on March 19, 2019, or any later released rollup package, will update the operating system kernel's build number from version 6.0.6002 to 6.0.6003. The last free security update rollup packages were released on January 14, 2020. Windows Server 2008 is eligible for the Extended Security Updates program.

This program allows volume license customers to purchase, in yearly installments, security updates for the operating system until at most January 10, 2023. The licenses are paid for on a per-machine basis.

If a user purchases an Extended Security Updates license in a later year of the program, they must pay for any previous years of Extended Security Updates as well. Extended Security Updates are released only as they become available.

A second release of Windows Server 2008 based on Windows 7, Windows Server 2008 R2, was released to manufacturing on July 22, 2009 [55] and became generally available on October 22, 2009. It is the first server operating system by Microsoft to exclusively support 64-bit processors, a move which would be followed by the consumer-oriented Windows 11 in 2021.

Windows Server 2008 supports the following maximum hardware specifications: [61] [62] [63]

Maximum physical processors: Standard: 4; Enterprise: 8; Datacenter: 32 (IA-32) or 64 (x64).

The preceding are some extremely basic examples of how, with a little study and a little practice, you can learn to enhance and streamline the processes by which you perform your regular Active Directory management tasks, using the tools provided in Windows Server 2008 R2.

Active Directory Administrative Center: Better Interactive Administration

Of course, there are some administrators who are simply not comfortable working from the command line. Indeed, there are some who scarcely know it exists. However, the capabilities provided by the Active Directory Module for Windows PowerShell need not be lost on those who prefer a graphical interface.

The console works by taking the selections you make and the information you supply in the ADAC graphical interface and translating them into the proper command-line syntax, using the cmdlets in the Active Directory Module. The program then executes the commands, receives the results, and displays the results in a graphical fashion. The Overview page provides access to the root of your domain, as well as basic functions, such as directory search and password reset.

As with most pages in ADAC, you can customize the appearance of the page, in this case by clicking the Add Content link and specifying which tiles should appear in the details pane. For anything else, you have to create the user first and then open its Properties sheet to configure it, often switching between many different tabbed pages in the process. With ADAC, the Create User page contains a great many more configuration settings; in fact, more than can fit on a single screen.
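Many of the same settings can also be supplied from the command line. A minimal New-ADUser sketch, with hypothetical names, container, and password prompt:

    # Create and enable a user account in a hypothetical Sales OU
    New-ADUser -Name "Kim Akers" -SamAccountName "kakers" -GivenName "Kim" -Surname "Akers" `
        -Path "OU=Sales,DC=contoso,DC=com" `
        -AccountPassword (Read-Host -AsSecureString "Enter password") -Enabled $true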

NOTE Not coincidentally, the list of configuration settings on the Create User page closely resembles the list of parameters for the New-ADUser cmdlet discussed earlier in this chapter.

In addition to creating new Active Directory objects, ADAC also enables you to move, disable, rename, and delete objects, and configure their properties.

Customizing the Interface

ADAC includes a Tree View that you can use to browse your domain, in the style of Active Directory Users and Computers, but it also has a List View option, to which you can add your own navigation nodes. Navigation nodes are essentially shortcuts that point to specific containers anywhere in your domain or in other domains.

Using the Add Navigation Nodes page, you can browse your enterprise and select the containers you need to access on a regular basis. For AD DS installations that span multiple domains, or even multiple forests, administrators can manage objects in containers anywhere in the enterprise, as long as there are trusts in place between the domains or forests.

You can build complex queries by specifying the exact object criteria you want to search within, limiting the scope of the search to specific navigation nodes, and using the Lightweight Directory Access Protocol (LDAP) query syntax. Suppose, for example, you are managing a large, multidomain Active Directory installation, and you have to locate the user object of the vice president who just called to complain that he is locked out of his account.
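The same kind of search can also be run from the Active Directory Module prompt with the Search-ADAccount cmdlet; a minimal sketch (the account name passed to Unlock-ADAccount is hypothetical):

    # List every locked-out account in the current domain
    Search-ADAccount -LockedOut | Select-Object Name, SamAccountName, LockedOut

    # Unlock the account once the right object has been identified
    Unlock-ADAccount -Identity "vpresident"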

You can then save the query for later reuse when the vice president locks himself out again.

Introducing Active Directory Web Services

ADAC might appear to be nothing more than a new management interface for Active Directory, but there is actually quite a bit that is new beneath the surface.

ADWS requires the Microsoft .NET Framework, and both the Active Directory Module for Windows PowerShell and ADAC rely on ADWS to communicate with the directory service. This is true not just in remote management scenarios, but for activities confined to the local system as well. If the ADWS service stops or fails to start, or you disable it, you will not be able to use Windows PowerShell or ADAC to manage the directory service, even when working at the domain controller console.

In a remote management scenario, no matter how you install the Active Directory Module for Windows PowerShell, the system will not be able to import the module successfully unless it has access to Active Directory Web Services on a computer running Windows Server 2008 R2. If the computer is not a member of a domain, or it is a member of a domain without a Windows Server 2008 R2 domain controller, you cannot use the Active Directory Module cmdlets to manage Active Directory.

NOTE Although there has been no official announcement as of yet, it is rumored that Microsoft will eventually release a version of Active Directory Web Services for computers running Windows Server 2008, and possibly earlier versions as well.

Unfortunately, this will be of no benefit to administrators running Windows Server 2008 Server Core, because the pre-R2 version of the operating system lacks support for the .NET Framework. These Web service protocols use SOAP, the native WCF message representation (which at one time stood for Simple Object Access Protocol but, mysteriously, is no longer an acronym), to generate Extensible Markup Language (XML) code, which the system transmits over the network using an application layer or transport layer protocol.

If you prefer, you can also install the features using Windows PowerShell cmdlets or the ServerManagerCmd.exe command-line tool. Both modules also require the .NET Framework 3.5.1 feature. After opening a Windows PowerShell session with elevated privileges (by right-clicking the shortcut and selecting Run As Administrator), use the following command to import the ServerManager module:

    Import-Module ServerManager

Once you have done this, you can install individual features by name using the Add-WindowsFeature cmdlet.
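For example, the Active Directory Module itself corresponds to a Remote Server Administration Tools feature; a sketch of installing it, assuming RSAT-AD-PowerShell is the feature name in use on the system:

    Import-Module ServerManager
    # Install the Active Directory Module for Windows PowerShell feature
    Add-WindowsFeature RSAT-AD-PowerShell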

The cmdlet automatically installs all of the dependent elements the two features require. To use ServerManagerCmd.exe instead, you must again open your command prompt session with elevated privileges, and then execute the installation commands for the two features individually.

Selecting Functional Levels in Windows Server 2008 R2

In Windows Server 2008 R2, as in all of the previous Windows Server releases since Windows 2000, functional levels are essentially a version control system for domain controllers. Because all of the domain controllers in a domain (and in some cases a forest) have to communicate with each other, they must all be running the same Active Directory code to implement certain new features.

When a Windows Server release adds new functionality to Active Directory, it is often necessary for all participating domain controllers to be running that same release. Raising a domain or a forest to a specific functional level prevents domain controllers not supporting the same functional level from joining the domain or the forest.

This ensures that all of the domain controllers support the same set of features. For example, if you create a new domain and specify that it use the Windows Server domain functional level, then any additional domain controllers you add to the domain must be running Windows Server or a newer version as well.

In the same way, if you set the forest functional level to Windows Server , all of the domains you create in that forest will operate at the Windows Server domain functional level.

Administrators can set functional levels while promoting a server to a domain controller using the Active Directory Domain Services Installation Wizard (Dcpromo.exe).

Once you have raised a domain functional level or forest functional level, you cannot undo that action, except in certain highly specific circumstances.

When you select the Windows Server 2008 R2 forest functional level, the following modifications apply:

- All of the new domains you create in the forest will operate at the Windows Server 2008 R2 domain functional level by default.

- Active Directory will not permit you to add any domain controller running an operating system prior to Windows Server 2008 R2 to any domain in the forest. Note, however, that this restriction affects only domain controllers, not member servers or workstations.

- The Active Directory Recycle Bin becomes available. This feature enables administrators to restore deleted Active Directory objects while Active Directory Domain Services is running.

Using the Windows Server 2008 R2 Domain Functional Level

If you select the Windows Server 2008 R2 forest functional level while creating a new forest, you have no choice regarding the domain functional level, because all of the domains in a Windows Server 2008 R2 forest must use the Windows Server 2008 R2 domain functional level.

This page enables you to select any functional level for the domain equal to or higher than the forest functional level setting. Although it might seem counterintuitive, it is possible to set the domain functional level higher than the forest functional level, and this is the only scenario in which it is possible to lower a functional level after you have raised it.

If your forest is set to the Windows Server 2008 forest functional level, you can raise your domain to the Windows Server 2008 R2 domain functional level, and then lower it back down to the Windows Server 2008 domain functional level, if necessary.

You cannot roll back the domain functional level to Windows Server 2003, however, no matter what the value of the forest functional level. When you elevate the domain functional level to Windows Server 2008 R2, the domain controllers for the domain implement all of the features provided by the lower domain functional levels.

In addition, the Windows Server 2008 R2 domain functional level makes authentication mechanism assurance available, which adds information about the user's logon method to the Kerberos token. The information takes the form of a global group membership. This enables the system to grant users access to certain protected resources only when they meet specific authentication requirements, such as when they use a smart card or when the smart card they use has a certificate with 2,048-bit encryption.

At one time, when a user deleted an important file, it was necessary for an administrator to restore it from a system backup. Microsoft then introduced the Recycle Bin feature to the Windows operating systems, which enables users to reclaim their deleted files themselves. For years, administrators have requested a similar feature for Active Directory.

In Windows Server 2008 and earlier versions, it is possible to restore a deleted Active Directory object from a backup, but the process is daunting. After performing the restoration from the backup medium, you have to mark the object as authoritative, to ensure that it replicates to all of your domain controllers, and you have to do this in Directory Services Restore Mode, which means the domain controller must be offline. With Windows Server 2008 R2, however, we finally have a Recycle Bin for Active Directory that enables administrators to restore deleted objects with all of their attributes and permissions intact.

NOTE Another form of Active Directory object recovery, called tombstone reanimation, has also been available since the Windows Server 2003 release, and this recovery process does not require any server downtime. However, objects in their tombstone state lose some of their attribute values, so the recovered objects are lacking some of their properties.

Understanding Windows Server 2008 R2 Object Recovery

On an installation using the Windows Server 2008 forest functional level or lower, when you delete an Active Directory object, it experiences a change of state, becoming a tombstone object and losing many of its attributes in the process.

With the Windows Server 2008 R2 forest functional level and the Active Directory Recycle Bin enabled, deleting an object causes its state to change to logically deleted, with all of its attributes left intact. This is a new state in Windows Server 2008 R2, during which it is possible to restore the object without the loss of any properties or permissions. The system moves objects in this state to a Deleted Objects container and mangles their distinguished names so that they are not accessible by the usual means.

This is also a new state in Windows Server 2008 R2, and although objects in this state lose most of their attributes like tombstone objects, they are not recoverable at this point, using either the Recycle Bin or the authoritative restore process in Directory Services Restore Mode.

TIP Administrators can change the lifetime values from their defaults by modifying the msDS-deletedObjectLifetime attribute for the deleted object lifetime, and the tombstoneLifetime attribute for the recycled object lifetime.

Once you enable it, you cannot disable it again. You cannot use Recycle Bin to restore objects you deleted before you enabled Recycle Bin. These are already tombstone objects, and most of their attributes are irrevocably lost. After opening a session with elevated privileges, restoring deleted objects requires two cmdlets: Get-ADObject, to locate the desired object in the Deleted Objects folder, and Restore-ADObject, to perform the actual restoration.
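A minimal sketch of the whole sequence, assuming a contoso.com forest and a hypothetical account name:

    # Enable the Active Directory Recycle Bin for the forest (this cannot be undone)
    Enable-ADOptionalFeature 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target 'contoso.com'

    # Locate a deleted user by its original account name and restore it
    Get-ADObject -Filter 'samAccountName -eq "kakers"' -IncludeDeletedObjects | Restore-ADObject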

When restoring multiple objects, and especially organizational units (OUs) that contain other objects, the order in which you restore the objects can be critical, and the filter strings can be more complex. With the Active Directory Recycle Bin, you can only restore objects to a live parent. This means, for example, that if you accidentally delete an OU object, you must restore the OU itself before you can restore any of the objects in that OU.

If you delete an OU that contains other OUs, you must start by restoring the parent OU (that is, the highest deleted OU in the hierarchy) before you can restore the subordinate ones.

TIP When restoring a hierarchy of objects, a series of exploratory Get-ADObject commands might be necessary to ascertain the correct order for the restorations.
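A sketch of one such exploratory command, listing every deleted object together with the container it last belonged to:

    # Show deleted objects and their last known parent containers
    Get-ADObject -Filter {isDeleted -eq $true -and name -ne "Deleted Objects"} `
        -IncludeDeletedObjects -Properties lastKnownParent |
        Select-Object Name, lastKnownParent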

In these cases, you might want to use commands that include the -Properties lastKnownParent parameter to determine parental relationships between the deleted objects.

Many IT organizations prefer to install and configure their servers and workstations at a central location, and then deploy them to their final destinations.

In many cases, this means that the domain the computer will eventually join is not available at the time of the installation. The result is that IT personnel have to wait to join the computer to the domain until the system is on site, which is often an impractical solution.

The offline domain join capability in Windows Server 2008 R2 enables administrators to gather the information needed to join a computer running Windows Server 2008 R2 or Windows 7 to a domain and save it to the computer, without requiring access to the domain controllers. When the computer starts for the first time in its final location, it automatically joins the domain using the saved information, with no interaction and no reboot necessary.

Once this is complete, you copy the file to the computer you want to join to the domain and run Djoin.exe again to apply it. The first computer, called the provisioning computer, must be running Windows Server 2008 R2 or Windows 7, and it must have access to a domain controller. By default, the domain controller must be running Windows Server 2008 R2. Optional parameters enable you to specify the name of an OU where you want to create the computer object, and the name of a specific domain controller to use.

To deploy the metadata on the target computer, which must also be running Windows Server 2008 R2 or Windows 7, you copy the file that Djoin.exe created to the target system and run Djoin.exe again there. The system does not have to have access to its eventual domain, or even be connected to a network. Once you have provisioned the computer, you can move it to its final location. The next time you restart the system, it will be joined to the domain you specified and ready to use.
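A sketch of the two Djoin.exe commands, run from elevated Windows PowerShell prompts (the domain, computer, and file names are hypothetical):

    # On the provisioning computer: create the computer account and save the metadata blob
    djoin /provision /domain contoso.com /machine BRANCH-PC01 /savefile C:\odj\blob.txt

    # On the target computer: apply the saved blob to the local operating system
    djoin /requestODJ /loadfile C:\odj\blob.txt /windowspath $env:SystemRoot /localos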

You can also apply the metadata during an unattended operating system installation; to do the latter, you insert a reference to the metadata file that Djoin.exe created into the Unattend.xml file you use for the installation.

Service Accounts

Applications and services require accounts to access network resources, just as users do. Traditionally, services have used built-in accounts such as Local System, Local Service, and Network Service. These accounts are simple to manage, but they do have drawbacks. First, they are local accounts, which means administrators cannot manage them at the domain level. Second, these system accounts are typically shared by multiple applications, which can be a security issue.

It is possible to configure an application to use a standard domain account. This enables you to isolate the account security for a particular application, but it also requires you to manage the account passwords manually.

If you change the account password on a regular basis, you must reconfigure the application that uses it, so that it supplies the correct password when logging on to the domain.

The managed service account is a new feature in Windows Server 2008 R2 that takes the form of a new Active Directory object class. Because managed service accounts are based on computer objects, they are not subject to Group Policy-based password and account policies as are domain users. Managed service accounts also do not allow interactive logons, so they are an inherently more secure solution for applications and services.

Most importantly, managed service accounts eliminate the need for manual credential management. When you change the password of a managed service account, the system automatically updates all of the applications and services that use it.
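Setting one up involves a handful of Active Directory Module cmdlets; a minimal sketch with hypothetical account and computer names:

    # Create the managed service account in the directory
    New-ADServiceAccount -Name "SvcAppPool01"

    # Associate the account with the computer that will host the service
    Add-ADComputerServiceAccount -Identity "APPSRV01" -ServiceAccount "SvcAppPool01"

    # On the hosting computer itself, install the account for local use
    Install-ADServiceAccount -Identity "SvcAppPool01"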

To use a managed service account for a particular application or service, you must run the Install-ADServiceAccount cmdlet on the computer hosting the application.

The Best Practices Analyzer (BPA) has a collection of predefined rules for each role it supports, rules specifying the recommended architectural and configuration parameters for the role.

For example, one AD DS rule recommends that each domain have at least two domain controllers. When you run a BPA scan, the system compares the recommendations to the actual role configuration and points out any discrepancies. The scan returns a status indicator for each rule that indicates whether the system is compliant or noncompliant.
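The same scan can be run from Windows PowerShell using the BestPractices module; a sketch, assuming the AD DS model identifier shown here:

    Import-Module BestPractices

    # Run the AD DS Best Practices Analyzer scan and show anything that is not merely informational
    Invoke-BpaModel Microsoft/Windows/DirectoryServices
    Get-BpaResult Microsoft/Windows/DirectoryServices |
        Where-Object { $_.Severity -ne "Information" }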

There is also a warning status for rules that are compliant at the time of the scan, but whose configuration settings might render them noncompliant under other operational conditions. After a delay as the analyzer performs the scan, the results appear. The analyzer compares its preconfigured rules to the information in the XML file and reports the results.

Although storage space is cheaper and more plentiful than ever before, the increased emphasis on audio and video file types, whether business related or not, has led to a storage consumption rate that in many instances more than equals its growth.

There is only one new role service in the File Services role, but there are innovative new features introduced into some of the existing role services. In an enterprise with multiple sites, increased storage capacity typically leads to increased consumption of bandwidth between sites, and these new features can help administrators manage this bandwidth consumption and improve file access times in the process.

Using the File Classification Infrastructure

An enterprise network can easily have millions of files stored on its servers, and administrators are responsible for all of them. However, different types of files have different management requirements.

Enterprise networks typically have a variety of storage technologies to accommodate their different needs. For example, drive arrays using Redundant Array of Independent Disks (RAID) for fault tolerance are excellent solutions for business-critical files, but they are also more expensive to purchase, set up, and maintain.

Storing noncritical files on a medium such as this would be a waste. At the other end of the spectrum, an offline or near-line storage medium, such as magnetic tape or optical disks, can provide inexpensive storage for files that are not needed on a regular basis, or that have been archived or retired.

The big problem for the administrator with a variety of storage options is determining which files should go on which medium, and then making sure that they get there. However, determining which files require a certain treatment and seeing that they receive it can be a major administrative problem.

Traditional methods for classifying files include storing them in designated folders, applying special file naming conventions, and, in the case of backups, the long-standing use of the archive bit to indicate files that have changed.

None of these methods are particularly efficient for complex scenarios on a large scale, however, because of the manual maintenance they require or their limited flexibility. Who is going to be responsible for making sure that files are named properly, or moved to the appropriate folders?

It would not be practical for IT personnel to monitor the file management practices of every user on the network. Also, if you designate one folder for files containing sensitive data and another for files that are modified often, what do you do with a file that is both sensitive and frequently updated?

Introducing the FCI Components

The File Classification Infrastructure (FCI) introduced in Windows Server 2008 R2 is a system that enables administrators to define their own file classifications, independent of directory structures and file names, and configure applications to perform specific actions based on those classifications.

FCI consists of four components, as follows:

- Classification Properties: Attributes created by administrators that identify certain characteristics about files, such as their business value or level of sensitivity

- Classification Rules: Mechanisms that automatically apply classification properties to certain files based on specific criteria, such as file contents

- File Management Tasks: Scheduled operations that perform specified actions on files with certain classification properties

- Storage Reports Management: An engine that can generate reports that, among other things, document the distribution of classification properties on file server volumes

For example, an administrator might create a classification property that indicates whether a file contains personal or confidential information.

Also new is the File Management Tasks node, which you use to execute specific actions based on the file classifications you have created. The Storage Reports Management node now includes the ability to generate reports based on FCI properties, as well as other, traditional criteria.

FCI is designed to be more of a toolkit for storage administrators than an end-to-end solution. FCI provides various types of classification properties, but it is up to the individual administrator to apply them to the particular needs of an enterprise.

File Management Tasks provide a basic file expiration function and the ability to execute custom commands against particular file classifications. However, FCI is also designed with an extensible infrastructure so that third-party developers can integrate property-based file selection into their existing products.

Creating FCI Classification Properties

The first step in implementing FCI is to create the classification properties that you will apply to files with certain characteristics.

Classification properties are simple attributes, consisting only of a name, a property type, and sometimes a list of values.

Property types indicate the nature of the classification you want to apply to your files; they do not have to contain the classification criteria themselves. FCI supports seven classification property types.

Aggregation refers to the behavior of a classification property type when a rule or other process attempts to assign the same property to a file, but with a different value.

An attempt to assign a second property value to an already-classified file results in an error. You can configure a rule to reevaluate files with these properties, but the rule will simply assign a new value that overwrites the old one, without considering the existing value of the property. When there is a value conflict, such as if one rule assigns a file High Security and another rule assigns it Low Security, the High Security value takes precedence, enabling the property to err on the side of caution and use the greatest possible security measures.

However, if you are seeking to categorize files based on subject, the Multiple Choice List property would probably be preferable, because it enables you to assign multiple properties to a single file. After specifying a name for the property, and optionally a description, you select a Property Type, and the controls change depending on the type you have chosen.

The types that do not support a selection of possible values (Date-time, Number, and String) require no additional configuration. The other types enable you to add the possible values that your classification rules can assign to files, based on criteria you select.

Creating FCI Classification Rules

Once you have created your classification properties, you can assign them to your files by creating classification rules.

On the Rule Settings tab, you supply a name for the rule, and optionally a description, and then click Add to define the scope; that is, specify the volumes or folders containing the files to which you want to apply properties.

NOTE These classification mechanisms take the form of plug-in modules, of which Windows Server 2008 R2 includes only two relatively rudimentary examples. Microsoft has designed this part of the FCI to be extensible, so that administrators and third-party developers can use the FCI application programming interface (API) to produce their own classification plug-ins, as well as scripts and applications that set properties on files.

In the Property Name and Property Value fields, you specify which of your classification properties you want to assign to the files the rule selects, and what value the rule should insert into the property. Clicking Advanced displays the Additional Rule Parameters dialog box, in which you find the following tabs:

- Evaluation Type: Enables you to specify how the rule should behave when it encounters a file that already has a value defined for the specified property.

You can elect to overwrite the existing property value or aggregate the values, for properties that support aggregation. If you encrypt files after they have classification properties assigned, they retain those properties and applications can read them, but you cannot modify the properties or assign new ones while the files are in their encrypted state.

Once you have created your classification rules, you must execute them to apply properties to your files. You can click Run Classification With All Rules Now to execute your rules immediately, or you can click Configure Classification Schedule to run them at a later time or at regular intervals.

TIP Administrators new to FCI have a tendency to create large numbers of properties and rules, simply because they can.

Be aware that processing rules, and especially those that search for complex regular expressions, can take a lot of time and consume a significant amount of server memory. Microsoft recommends only applying classifications that your current applications can utilize.

Performing File Management Tasks

Once you have classified your files, you can use File Server Resource Manager to create file management tasks, which can manipulate the files based on their classification properties. Here again, the capabilities provided with Windows Server 2008 R2 are relatively rudimentary, but as with the classification mechanisms, administrators and third-party developers can integrate property-based file processing into their applications.

Here, as in the Classification Rule Definitions dialog box, you supply a name, a description, and a scope for the task. On the Action tab, you can select one of the following action types:

- File Expiration: Enables you to move files matching specified property values to another location

- Custom: Enables you to execute a program, command, or script on files matching specified property values

On the Condition tab, you specify the property values that files must possess for the file management task to process them, using the Property Condition dialog box. The Schedule tab enables you to configure the task to execute at specified intervals, and the Notification and Report tabs specify the types of information administrators receive about the task processing.

Although the File Expiration action type enables administrators to migrate files based on property values, it is the Custom action that provides true power for the savvy administrator. Using the Executable and Arguments fields, administrators can run a command, program, or script on the files having the specified properties.
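For instance, a custom task might call Icacls.exe to tighten NTFS permissions on each matching file; a sketch of the kind of command involved, shown here against a single hypothetical file and group:

    # Remove inherited permissions and grant read-only access to one group
    icacls "D:\Finance\Q3-Forecast.xlsx" /inheritance:r /grant "CONTOSO\Finance-Readers:(R)"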

Some of the possible scenarios for customized tasks are as follows:

- Modify the permissions for the selected files using Icacls.exe

Using BranchCache

Branch office technologies were a major priority for the Windows Server 2008 R2 and Windows 7 development teams, and BranchCache is one of the results of that concentration. On an enterprise network, a branch office can consist of anything from a handful of workstations with a virtual private network (VPN) connection to a fully equipped network with its own servers and IT staff.

However, branch offices nearly always require some network communication with the home office, and possibly with other branches as well. The wide area network (WAN) connections between remote sites are by nature slower and more expensive than local area network (LAN) connections, and the primary functions of BranchCache are to reduce the amount of WAN bandwidth consumed by branch office file sharing traffic and improve access times for branch office users accessing files on servers at remote locations.

As the name implies, BranchCache is file caching software. Caching is a technique by which a system copies frequently used data to an alternative storage medium, so that it can satisfy future requests for the same data more quickly or less expensively. BranchCache works by caching files from remote servers on the local drive of a branch office computer so that other computers in the branch office can access those same files locally, instead of having to send repeated requests to the remote server.

BranchCache has two operational modes, as follows:

- Distributed Cache Mode: Up to 50 branch office computers cache files requested from remote servers on their local drives, and then make those cached files available to other computers on the local network, on a peer-to-peer basis.

- Hosted Cache Mode: A server at the branch office running Windows Server 2008 R2 maintains the cache and makes the cached files available to all of the BranchCache clients on the branch office network.

The primary difference between these two modes is that Hosted Cache Mode requires the branch office to have a server running Windows Server 2008 R2, whereas Distributed Cache Mode requires only Windows 7 workstations.

The advantage of Hosted Cache Mode is that the server, and therefore the cache, is always available to all of the workstations in the branch office. Workstations in Distributed Cache Mode can only share cached data with computers on the local network, and if a workstation is hibernating or turned off, its cache is obviously unavailable. BranchCache caches only read requests, not writes. This is because caching writes is a much more complicated operation than caching reads, due to the possible existence of conflicts between multiple versions of the same file.

The BranchCache communication between the clients and the remote server proceeds as follows:

1. The client sends a request for a file to the remote server. The only difference from a standard request is that the client includes an identifier in the message, indicating that it supports BranchCache.

2. When the BranchCache-enabled remote server receives the request and recognizes that the client also supports BranchCache, it replies, not with the requested file, but with content metadata in the form of a hash describing the requested file.

The metadata is substantially smaller than the requested file itself, so the amount of WAN bandwidth utilized so far is relatively small.

3. On a Distributed Cache Mode installation, the client sends this message as a multicast transmission to the other BranchCache clients on the network, using the BranchCache discovery protocol.

On a Hosted Cache Mode installation, the client sends the message to the local server that hosts the cache, using the BranchCache retrieval protocol.

4. In Distributed Cache Mode, the client fails to receive a reply from another client on the network. In Hosted Cache Mode, the client receives a reply from the local server indicating that the requested data is not in the cache.

5. The client retransmits its original file request to the remote server. This time, however, the client omits the BranchCache identifier from the request message.

6. The remote server, on receiving a standard non-BranchCache request, replies by transmitting the requested file.

7. The client receives the requested file and, on a Distributed Cache Mode installation, stores the file in its local cache. On a Hosted Cache Mode installation, the client sends a message to its local caching server using the BranchCache hosted cache protocol, advertising the availability of its newly downloaded data. The server then retrieves the file from the client and adds it to the hosted cache.

When another client subsequently requests the same data from the remote server, the communication process is exactly the same up until step 4.

In this case, the client receives a reply from another computer (either a client or the hosted cache server, depending on the mode) indicating that the requested data is present in its cache. The client then uses the BranchCache retrieval protocol to download the data from the caching computer.

For this and subsequent requests for that particular file, the only WAN traffic required is the exchange of request messages and content metadata, both of which are much smaller than the actual data file. BranchCache is not installed by default on Windows Server 2008 R2; you must install one or both of the BranchCache modules supplied with the operating system, and then create Group Policy settings to configure them. To enable BranchCache for all three protocols, you must install both of the modules using Server Manager.
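The same installation can also be scripted with the Server Manager cmdlets; a sketch, assuming BranchCache is the feature name and FS-BranchCache is the role service that adds BranchCache support to the File Services role:

    Import-Module ServerManager
    # Add the BranchCache feature and the BranchCache for Network Files role service
    Add-WindowsFeature BranchCache, FS-BranchCache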

This setting enables the file server to transmit content metadata to qualified BranchCache clients instead of the actual files they request. When you enable Hash Publication for BranchCache, you can elect to allow hash publication for all file shares on the computer, or only for the file shares on which you explicitly enable BranchCache support.

Computers running Windows 7 have the BranchCache client installed by default. Enabling the main BranchCache setting without either one of the mode settings configures the client to cache server data on its local drive only, without accessing caches on other computers. A separate setting specifies the round-trip network latency above which the client begins caching files from a server; the default setting is 80 ms. When you decrease the value, the client caches more files; increasing the value causes it to cache fewer files.

Another setting controls the percentage of local disk space the client can devote to the cache; the default value is 5 percent. To facilitate this peer-to-peer communication, administrators must configure any firewalls running on the clients to admit incoming traffic on the ports the retrieval and discovery protocols use, which are Transmission Control Protocol (TCP) port 80 and User Datagram Protocol (UDP) port 3702, respectively.

You must then provide the server with a certificate issued by a certification authority (CA) that the clients on the branch office network trust. This can be an internal CA running on the network or a commercial CA run by a third party. As an alternative to Group Policy, you can configure BranchCache clients with the Netsh.exe command-line tool. Note, however, that client configuration values you set using Group Policy take precedence over those you set with Netsh.
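A sketch of the corresponding Netsh commands, run from an elevated prompt on a client (the hosted cache server name is hypothetical):

    # Configure the client for Distributed Cache Mode
    netsh branchcache set service mode=distributed

    # Or point the client at a hosted cache server instead
    netsh branchcache set service mode=hostedclient location=BRANCHSRV01

    # Verify the resulting BranchCache configuration
    netsh branchcache show status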

However, to do so, the namespace must be hosted on a server running Windows Server 2008 R2 or Windows Server 2008. If you enable access-based enumeration on a DFS namespace and on the target shares that the namespace links to (using the Share and Storage Management console), the shared folders are completely hidden from unauthorized users. Prior to the R2 release, you could only do this by manually changing the permissions on the replicated folder.

Note, however, that read-only folders impose an additional performance burden on the servers hosting them, because DFS Replication must intercept every Create and Open function call to determine if the requested destination is in a read-only folder.

IIS 7.0, released with Windows Server 2008, introduced a new modular architecture for the Web server. Since then, as anticipated, the IIS development team has been working on a variety of enhancements and extensions that build on that new architecture.

Although based on the same basic structure as IIS 7.0, the version included with Windows Server 2008 R2 is IIS 7.5. This chapter lists the new features in IIS 7.5.

Installing IIS 7.5

That dependency is still there, however.

The Microsoft Web Platform is an integrated set of servers and tools that enable you to deploy complete Web solutions, including applications and ancillary servers, with a single procedure. The Microsoft Web Platform Installer is a tool that enables you to select, download, install, and configure the features you want to deploy on your Web server. The Web Platform Installer file you download is a stub, a tiny file that enables you to select the modules you want to install and then to download them. The installer provides a selection of collaboration, e-commerce, portal, and blog applications, and enforces the dependencies between the various elements.

During the installation process, Web Platform Installer prompts you for information needed by your selected applications, such as what subdirectory to install them into, what passwords to use, and so on.

When the process is complete, you have a fully functional Web site, complete with IIS and applications, ready to use. Selecting a server, site, or application and clicking Export Application launches a wizard in which you can select the elements that you want to export. The wizard then creates a package in the form of a Zip file, which contains the original content plus configuration settings in Extensible Markup Language (XML) format.

The tool also includes a Remote Agent Service, which administrators can use to synchronize Web servers in real time over a network connection. This enables you to replicate sites and servers on a regular basis so that you can create Web farms for load balancing and fault tolerance purposes. After installing the role service, you create an authoring rule that specifies what content you want to be able to publish and which users can publish it. Then, using a feature called the WebDAV redirector on the client computer, you map a drive to your Web site.

Copying files to that drive automatically publishes them on the Web site. However, Microsoft is releasing an updated version of the service, to synchronize its feature set with the version included with Windows Server 2008 R2.

FTP was created at a time when security was not as great a concern as it is now, and as a result, it has no built-in data protection of any kind. Clients transmit passwords in clear text, and transfer files to and from servers in unencrypted form.

Windows Server 2008 R2, however, has an FTP server implementation that is enhanced with better security measures and other new features. The FTP server included with Windows Server 2008 required you to install the old IIS 6.0 management console; the new version is fully integrated into IIS Manager. Microsoft has also included an additional role service, FTP Extensibility, which enables developers to use their own managed code to create customized authentication, authorization, logging, and home directory providers. However, Microsoft is releasing an updated version of the service for Windows Server 2008 to synchronize its feature set with the version included with Windows Server 2008 R2.

Hosting Applications with IIS 7.5

Server Core is a stripped-down version of the Windows Server operating system that eliminates many roles and features and most of the graphical interface. In Windows Server 2008, Server Core could not host ASP.NET applications because it lacked support for the .NET Framework. Because ASP.NET is one of the most commonly used development environments for Web applications today, this was a major shortcoming. However, Windows Server 2008 R2 adds support for the .NET Framework to Server Core, so IIS 7.5 can now host ASP.NET applications there.

ASP.NET 4 and IIS 7.5 also introduce an application pool auto-start capability, and Microsoft has incorporated this capability into Windows Server 2008 Service Pack 2 as well. This feature enables an administrator to configure an application pool to start up automatically, while temporarily not processing HTTP requests.

This allows applications requiring extensive initialization to finish loading the data they need or to complete other processes before they begin accepting HTTP requests. IIS 7.5 also refines its FastCGI handling: you can configure the FastCGI process to be terminated when an error occurs. In addition, the application pool identity can be used for anonymous authentication in place of the IUSR account.

Managing IIS 7.5

Windows Server 2008 R2 includes a number of IIS configuration tools that were previously available only as separate downloads, and Microsoft has enhanced many of the existing tools.

Once you have access to the IIS Windows PowerShell snap-in, you can display all of the cmdlets it contains by using the following command:

    Get-Command -pssnapin WebAdministration

The snap-in uses three different types of cmdlets, as follows:

- PowerShell provider cmdlets
- Low-level configuration cmdlets
- Task-oriented cmdlets

These cmdlet types correspond to three different methods of managing IIS from the Windows PowerShell prompt, as described in the following sections.

By piping the results of the Get-Item cmdlet to the Select-Object cmdlet, you can display all of the properties of a selected site. Any module that includes a provider hierarchy must support these standard provider cmdlets. Once within the IIS hierarchy, you can use low-level configuration cmdlets to manage specific IIS elements without having to type extended path names. This new architecture, carried over into the IIS 7.5 release, is designed to be extensible. This extensibility complicates the process of developing a Windows PowerShell management strategy, however.

Cmdlets might have static parameters that enable them to manage specific properties of an element, but if a third-party developer creates an IIS extension that adds new properties to that element, the existing cmdlets cannot manage them. Therefore, the IIS Windows PowerShell snap-in includes low-level configuration cmdlets that you can use to view and manage all of the hundreds of IIS configuration settings, including custom settings added by IIS extensions.
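A sketch of the low-level approach, reading and then changing a single setting (the site name and property are only illustrative):

    # Read the directory browsing setting for the Default Web Site
    Get-WebConfigurationProperty -Filter /system.webServer/directoryBrowse `
        -PSPath 'IIS:\Sites\Default Web Site' -Name enabled

    # Change the same setting
    Set-WebConfigurationProperty -Filter /system.webServer/directoryBrowse `
        -PSPath 'IIS:\Sites\Default Web Site' -Name enabled -Value $false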

One set of task-oriented cmdlets, concerned with managing IIS sites, is as follows:

- Get-Website
- New-Website
- Remove-Website
- Start-Website
- Stop-Website

Unlike the low-level cmdlets, the task-oriented cmdlets do not rely on the IIS namespace (although they can utilize it), and they use static parameters to configure specific properties.
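A sketch of creating and starting a site with these task-oriented cmdlets (the site name, port, and path are hypothetical):

    # Create a new Web site bound to port 8080 and start it
    New-Website -Name "Intranet" -Port 8080 -PhysicalPath "C:\inetpub\intranet" -ApplicationPool "DefaultAppPool"
    Start-Website -Name "Intranet"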

Once you have created the site, you can even use the Windows PowerShell interface to create new content, such as a simple ASP.NET test page in the site's home directory. Also accessible through the console are the features described in the following sections.

Using Configuration Editor

Configuration Editor is a graphical tool that enables administrators to view and manage any setting in any of the IIS configuration files.

Because the tool is based on the IIS configuration schema, it can even manage custom settings without any interface modifications. In addition, once you have performed your modifications, the Configuration Editor can generate a script that duplicates those modifications for execution on other servers.

You can configure a multitude of settings for the new site, after which it appears as part of the collection. Finally, back on the Configuration Editor page, clicking Generate Script in the Actions pane displays script code that will create a new site identical to the one you just added, using managed code (C#), JavaScript, or the Appcmd.exe command-line tool.

From this window, you can copy the code to a text file to save for later use. Request Filtering is essentially a graphical interface that inserts code into Web.config files. Requests that the filtering mechanism rejects are logged with error codes that indicate the reason for the rejection.

The Request Filtering page contains seven tabs that enable you to create the following types of filters:

- File Name Extensions: Filters incoming HTTP requests based on the extension of the file requested.

For example, this makes it possible to filter out requests for files in the bin folder without rejecting requests for files in the binary folder. This capability is particularly useful in preventing SQL injection attacks, in which query strings contain escape characters or other damaging code.

Using Configuration Tracing

Starting in version 7.5, IIS can trace changes made to its configuration files. In Windows Server 2008 R2, configuration tracing is disabled by default.

Clicking Scan This Role initiates the process by which the Best Practices Analyzer gathers information about IIS and compares it with a set of predefined rules. IIS conditions that differ substantially from the rules are listed in the analyzer as noncompliant results.

IIS remains one of the most widely used Web servers; as a result, there is a great deal to learn about it, and there are a great many extensions and add-ons available.

Both of these sites provide the latest IIS news, learning tools, community participation, and software downloads.

In late 2008, sales of mobile computers exceeded those of desktop computers for the first time.

Many of these mobile users require access to the internal resources of their corporate networks to perform their required tasks, and Microsoft provides a number of mechanisms that enable them to do so.

Virtual private networking can provide remote clients with complete access to the company intranet, and Network Policy Server helps administrators keep remote connections safe and secure.

In Windows Server 2008 R2, Microsoft has enhanced these services with new features, and has also introduced a new remote connectivity service for R2 servers and Windows 7 clients called DirectAccess.

Introducing DirectAccess

A virtual private network (VPN) connection is a secure pipeline between a remote client computer and a network server, using the Internet as a conduit.

When the client establishes the VPN connection with the server, it uses a process called tunneling to encapsulate the intranet traffic within standard Internet packets. With VPNs, the user on the client computer must explicitly launch the connection to the server, using a process similar to establishing a dial-up networking connection. Depending on the server policies, this can take several minutes. If the client loses its Internet connection for any reason, such as wandering out of a wireless hot spot, the user must manually reestablish the VPN connection.

DirectAccess, by contrast, uses connections that the client computer establishes automatically and that are always on. Users can access intranet resources without any deliberate interaction, just as though they were connected directly to the corporate network. As soon as the client computer connects to the Internet, it begins the DirectAccess connection process, which is completely invisible to the user.

By the time the user is logged on and ready to work, the client can have downloaded e-mail and mapped drives to file server shares on the intranet. DirectAccess not only simplifies the connection process for the user, it also benefits the network administrator.

DirectAccess connections are bidirectional, and Windows 7 clients establish their computer connections before the user even logs on to the system. This enables administrators to gain access to the client computer at any time so they can apply Group Policy settings, deploy patches, or perform other upgrade and maintenance tasks. Some of the other benefits of DirectAccess are as follows:

- Intranet detection: The DirectAccess client determines whether the computer is connecting directly to the corporate network or accessing the network remotely, and behaves accordingly.

Users can authenticate with smart cards or biometric devices. In DirectAccess, clients send intranet traffic through the tunnel, while the Internet traffic bypasses the tunnel and goes directly to the Internet. This is called split-tunnel routing. The latter feature is why DirectAccess relies so heavily on IPv6 for its connectivity.

Client computers can use the same IPv6 addresses wherever they happen to be in the world. Unfortunately, many networks still use IPv4, including the Internet. Therefore, DirectAccess includes support for a number of IPv6 transition technologies, which are essentially protocols that enable computers to transmit IPv6 packets over an IPv4 network. DirectAccess uses IPsec to authenticate client computers and users, and to ensure that the private intranet data that clients and servers transmit over the Internet remains private.
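On a Windows 7 client, the netsh contexts for the individual transition technologies give a rough indication of which one is currently active. This is a diagnostic sketch rather than part of any DirectAccess setup procedure.

    rem Check the state of the Teredo and 6to4 transition technologies
    netsh interface teredo show state
    netsh interface 6to4 show state
    rem IP-HTTPS is another transition technology that DirectAccess clients can fall back to
    netsh interface httpstunnel show interfaces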

IPsec provides end-to-end security, meaning that only the source and final destination systems can read the contents of the encrypted data packets. This also means that intermediate systems, the routers that forward packets through the Internet to their destinations, do not have to support IPsec. When a client connects to a DirectAccess server, it creates two separate IPsec tunnels. The client uses the first tunnel to reach a domain controller and DNS server on the intranet; with this access, the client can download Group Policy objects and initiate the user authentication process.

The client then uses the second connection to authenticate the user account and to access the intranet resources and application servers. IPsec can protect traffic in two modes. In transport mode, IPsec provides protection for the application data that IP datagrams carry as their payload. In tunnel mode, IPsec protects the entire IP datagram, including the header and the payload.
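To see what IPsec has actually negotiated on a Windows 7 or Windows Server 2008 R2 computer, the Windows Firewall with Advanced Security command line can list the active security associations and the connection security rules that produced them. These are generic inspection commands, not DirectAccess-specific tools.

    rem Active main mode security associations (the computer-level authentication)
    netsh advfirewall monitor show mmsa
    rem Active quick mode security associations (the protected traffic flows)
    netsh advfirewall monitor show qmsa
    rem The connection security rules currently defined on the computer
    netsh advfirewall consec show rule name=all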

DirectAccess uses the Encapsulating Security Payload (ESP) protocol for its authentication and encryption capabilities. The degree to which your intranet and the computers on it support IPv6 and IPsec is a critical factor in how you will deploy DirectAccess on your enterprise network.
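A simple first pass at assessing that support is to check whether IPv6 is bound and responding on the servers in question, for example from a command prompt on an intranet server:

    rem List the IPv6 interfaces that are present and connected on this computer
    netsh interface ipv6 show interfaces
    rem Confirm that the local IPv6 stack responds
    ping -6 ::1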

DirectAccess clients and servers, which must run Windows 7 or Windows Server 2008 R2, all have full support for IPsec connections using IPv6, but your application servers might not.

Understanding the DirectAccess Connection Process

The process by which a DirectAccess client establishes a connection to a DirectAccess server, and thereby to the company intranet, is a complicated one. However, the process is completely invisible to the user on the client computer.

The connection process works as follows. First, the client attempts to connect to a designated Web server on the intranet. The availability of the Web server indicates that the client is directly connected to the intranet; the inability to access the Web server indicates that the client is at a remote location. The client then proceeds to initiate a DirectAccess connection to access the intranet.
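This first step is essentially a network location check against a designated intranet Web server (often called the network location server). You can approximate the same check by hand; the host name below is purely hypothetical.

    rem Resolve the designated intranet Web server (host name is hypothetical)
    nslookup nls.corp.example.com
    rem Show the effective Name Resolution Policy Table the client uses for DirectAccess name resolution
    netsh namespace show effectivepolicy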

Next, the client establishes its first connection to the DirectAccess server on the intranet. By default, the client attempts to connect using IPv6 and IPsec natively, but if an IPv6 connection is not available (such as when the client is connected to the IPv4 Internet), it uses 6to4 or Teredo, depending on whether the computers have public or private IPv4 addresses.
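Whether the client ends up on 6to4 or on Teredo therefore depends largely on whether it has a public or a private (NAT-translated) IPv4 address, which you can check directly on the client:

    rem Display the client's IPv4 addresses to see whether they are public or private (RFC 1918) addresses
    netsh interface ipv4 show addresses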

Once the client is connected to the DirectAccess server, the two computers authenticate each other using their respective computer certificates. Once the computer authentication is complete, the client has access to the domain controller and the DNS server on the intranet.
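Because this mutual authentication relies on computer certificates, a useful sanity check on the client is to confirm that a suitable certificate is present in the local machine Personal store; certutil is built into Windows and can list it.

    rem List the certificates in the local computer's Personal (MY) store
    certutil -store my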

The process up to this point can occur before the user logs on to the client computer. The client then establishes its second connection to the DirectAccess server and, using the domain controller access it obtained from the first connection, performs a standard AD DS user authentication, using NTLMv2 credentials and the Kerberos V5 authentication protocol. The DirectAccess server authorizes the client to access intranet resources by checking the AD DS group memberships for the computer and the user.
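After the user logs on across a DirectAccess connection, two built-in commands show what the authentication and authorization steps had to work with: the Kerberos tickets the client obtained, and the group memberships evaluated for the user. Neither command is DirectAccess-specific.

    rem List the Kerberos tickets held in the current logon session
    klist tickets
    rem Show the user's group memberships, which the DirectAccess server checks for authorization
    whoami /groups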

Finally, the DirectAccess server begins functioning as a gateway between the client computer and the application servers and other resources that the client is authorized to use.

The following sections provide a high-level overview of the deployment process.

Choosing an Access Model

The access model you choose for your DirectAccess deployment specifies where on your intranet the IPsec encryption will terminate and how the traffic to and from the client will proceed once it passes through the DirectAccess server.

In the basic architecture of a DirectAccess deployment, the client is at a remote location, typically connected to the Internet. The corporate intranet, protected behind a firewall, has a DirectAccess server on a perimeter network, which makes it directly accessible from the Internet using a public IP address.

Clients connect to the DirectAccess server, and the server forwards their traffic to the other resources on the intranet. There are three access models supported by DirectAccess, as follows:

- End-to-end: In this model, DirectAccess clients establish transport mode ESP connections that go through the DirectAccess server and all the way to the individual application servers on the intranet.

This is the ideal solution from a security standpoint, but it requires all of the application servers to support IPsec connections using IPv6.

- End-to-edge: In this model, DirectAccess clients establish tunnel mode ESP connections to an IPsec gateway server at the edge of the intranet. The IPsec gateway server then forwards the client traffic, no longer protected by IPsec, to the application servers on the intranet. This model keeps IPsec traffic off of the intranet and enables you to use application servers that run Windows Server or any other operating system that supports IPv6.

- Modified end-to-edge: This model is identical to the end-to-edge model, except that it uses an additional IPsec tunnel that authenticates clients at the application server. Client traffic is therefore encrypted only as far as the IPsec gateway server, but it is authenticated all the way to the application server. This additional authentication also makes it easier for administrators to limit client access to specific application servers.

To use this model, application servers must be running Windows Server 2008 R2. If you have IPv6-capable applications or services running on Windows Server servers, DirectAccess clients can reach them only if you use the end-to-edge or modified end-to-edge access model.

If you have applications or services that only support IPv4 on your Windows Server servers, DirectAccess clients can only reach them if you use the end-to-edge or modified end-to-edge access model and have a NAT-PT device installed on your intranet.
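When deciding whether a particular application server is even a candidate for the end-to-end model, a quick check is whether it is reachable over IPv6 at all. The server name below is hypothetical.

    rem Look up an IPv6 (AAAA) record for the application server (name is hypothetical)
    nslookup -type=AAAA app1.corp.example.com
    rem Force an IPv6 ping to the same server
    ping -6 app1.corp.example.com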

 


The iSCSI target must be connected to all nodes that will be using the storage. To use a managed service account for a particular application or service, you must run the Install-ADServiceAccount cmdlet on the server hosting the application. With Windows Server 2008 R2 and DirectAccess, if the client is running Windows 7, the remote user has seamless, always-on remote access to corporate resources that does not compromise the secure aspects of remote connectivity.

