
Wednesday, May 23, 2012

Synchronize IIS

This quick guide walks you through the process of using the Web Deployment Tool to synchronize a Web site from an IIS source computer to an IIS destination computer. You can do this by "pushing" data to a remote destination or by "pulling" data from a remote source. This guide shows both methods, as well as an option to use a package file so that you do not have to install the Web Deployment Agent Service (MsDepSvc, also called the "remote agent service").
What are the ways you can synchronize using the Web Deployment Tool?
  • Push (synchronize from a local source to a remote destination)
  • Pull (synchronize from a remote source to a local destination)
  • Independent Sync (initiate a synchronization from a computer where both destination and source are remote)
  • Manual Local Sync (create a package file of the source and copy it to the destination, then run it locally)

Prerequisites

This guide requires the following prerequisites:
  • .NET Framework 2.0 SP1 or greater
  • Web Deployment Tool 1.1
Note: If you have not already installed the Web Deployment Tool, see Installing and Configuring Web Deploy.

Part 1 - View your site's dependencies

1. Get the dependencies of the Web site by running the following command:
msdeploy -verb:getDependencies -source:apphostconfig="Default Web Site"
2. Review the output of the dependencies and look for any script maps or installed components that are in use by the site. For example, if Windows Authentication is in use by the Web site, you will see <dependency name="WindowsAuthentication" />.
3. If your site inherits any script maps, these will not be listed in the dependencies, so you should also review your site's script maps manually (see the appcmd example after this list).
4. Compile a list of the components needed on the destination.
For detailed steps on analyzing the output of getDependencies, see Viewing Web Site Dependencies.
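For example, a minimal way to review the handler mappings (script maps) from the command line, assuming IIS 7 or later and the default site name used above:
%windir%\system32\inetsrv\appcmd list config "Default Web Site" /section:system.webServer/handlers
The output lists each handler mapping so you can note any custom ones that must also exist on the destination.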

Part 2 - Configure the target (destination)

1. Review the list of dependencies and install them on the destination server.
For example, let’s assume you had the following in use for your Web site:
• ASP.NET
• Windows Authentication
• Anonymous Authentication
Based on analyzing your dependencies, you would install those components on the destination server before performing the synchronization.
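For example, assuming Windows Server 2008 R2 or later with the ServerManager PowerShell module, the role services for the dependencies above could be installed like this (Anonymous Authentication is installed with the default Web Server role, so it needs no separate role service; adjust the names to match your own dependency list):
Import-Module ServerManager
Add-WindowsFeature Web-Server, Web-Asp-Net, Web-Windows-Auth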

Part 3 – Synchronize your site to the target

1. Always make a backup of the destination and source servers. Even if you are just testing, a backup allows you to easily restore the state of your server. Run the following command to back up an IIS 7 or later server:
%windir%\system32\inetsrv\appcmd add backup "PreMsDeploy"
2. Install the remote agent service on either the source or the destination, depending on whether you want to "pull" the data from a remote source or "push" the data to a remote destination.
3. Start the remote agent service on that computer:
net start msdepsvc 
4. Run the following command to validate what would happen if the synchronization were run. The -whatif flag does not show every change; it shows an optimistic view of what would change if everything succeeds (for example, it will not catch errors such as being unable to write to the destination).
Pushing to remote destination, running on source computer (the computerName argument identifies the remote destination computer).
msdeploy -verb:sync -source:apphostconfig="Default Web Site" -dest:apphostconfig="Default Web Site",computername=Server1 -whatif > msdeploysync.log
Pulling from a remote source, running on destination machine (the computerName argument identifies the remote source computer).
msdeploy -verb:sync -source:apphostconfig="Default Web Site",computername=Server1 -dest:apphostconfig="Default Web Site" -whatif > msdeploysync.log
5. After verifying the output, run the same command again without the -whatif flag:
Pushing to remote destination, running on source machine
msdeploy -verb:sync -source:apphostconfig="Default Web Site" -dest:apphostconfig="Default Web Site",computername=Server1 > msdeploysync.log
Pulling from a remote source, running on destination machine
msdeploy -verb:sync -source:apphostconfig="Default Web Site",computername=Server1 -dest:apphostconfig="Default Web Site" > msdeploysync.log

Optional - Synchronize your site to the target by using a package file

If you don't wish to use the remote service, you can use a package (compressed file) instead.
1. Run the following command on the source server to create a package of the Web site for synchronization:
msdeploy -verb:sync  -source:apphostconfig="Default Web Site" -dest:package=c:\site1.zip
2. Copy the package file to the destination server.
3. Run the following command on the destination server to validate what would happen if the synchronization were run:
msdeploy -verb:sync -source:package=c:\site1.zip -dest:apphostconfig="Default Web Site" -whatif > msdeploysync.log
4. After verifying the output, run the same command again without the -whatif flag:
msdeploy -verb:sync -source:package=c:\site1.zip -dest:apphostconfig="Default Web Site" > msdeploysync.log

You are now done synchronizing your site. To verify, test browsing to the Web site on the destination server. For troubleshooting help, see Troubleshooting Web Deploy.

Summary

You have now synchronized a web site from a source IIS server to a destination IIS server, including viewing the dependencies, configuring the destination IIS server and performing the synchronization.
faith_a

Monday, May 21, 2012

Installing Lotus Notes 8.5.3 (Notes Client, Domino Administrator)

(Setup Lotus Notes Client and Domino Administrator Client 8.5.3)
1. Hardware requirements
Disk space: 1 GB or more
RAM: 512 MB for Windows XP (1 GB recommended); 1 GB for Windows 7 (1.5 GB or more recommended)
Processor: Intel Pentium 4, 1.2 GHz or higher, or a compatible/equivalent processor
2. Operating system requirements
Windows XP or later
3. Browser requirements
Internet Explorer 6, fully updated
Firefox 3.5
4. Installing the Notes Client and Administrator Client
Browse to the folder containing the Lotus Notes 8.5.3 installation files and run setup.exe.
When the Lotus Notes 8.5.3 window appears, click Next.

Read the license agreement, select I accept the terms in the license agreement, and then click Next.


Next, choose the installation path for the Notes Client, and then click Next.


Select the components to install; here I select Notes Client and Domino Administrator, and then click Next.


Click Install to begin the installation.


Click Finish to complete the installation.
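If you need to roll the client out to many machines, the installer can usually also be run silently. This is only a sketch, assuming the standard InstallShield-based setup.exe that ships with Lotus Notes 8.5.3; check the IBM documentation for your package before relying on these switches:
setup.exe /s /v"/qn"
The /s switch suppresses the InstallShield wizard and /v passes the quiet switch (/qn) through to the underlying MSI installation.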


4 Reasons ReFS (Resilient File System) is Better Than NTFS

Overview

Resilient File System (ReFS) is a new file system introduced in Windows Server 2012. Initially it is targeted at file servers, but that is just the beginning. Like its predecessor, NTFS, ReFS will start out as a file server file system and then become a mainstream file system; before long, we will all be using ReFS on our boot partitions.
So why would you want to change file systems? If NTFS is working, why should anybody even consider switching to ReFS? ReFS is better and faster than NTFS in many ways, but in one way more than all others: its resiliency.
ReFS will likely replace NTFS completely over the next few versions of Windows, and here are some reasons why you are going to really love the new file system.

4) ReFS Supports Long File Names and File Paths. Really Long.

Capacity is just one of the areas where ReFS makes changes. There is no longer a 255-character limit on file names: a file name in ReFS can be up to 32,768 Unicode characters long! The limit on the total path length has also been raised, from roughly 260 characters (the Win32 MAX_PATH limit) to 32K (32,768) characters.
The legacy 8.3 naming convention is no longer stored as part of the file data. There is only one file name, and it can be a very long name.
Other changes have increased the capacity as well, though it is unlikely that the maximum size of a single volume will impact a real person. NTFS already had a maximum volume size of 16 Exabytes. The ReFS format allows a maximum volume size of 262,144 Exabytes.

3) ReFS is Much Better at Handling Power Outages

NTFS stores all of its file information in metadata. The filename is stored in the metadata. The location on the hard disk is stored in the metadata. When you rename a file, you’re changing the metadata. Likewise, ReFS stores its file information in metadata.
One big difference between NTFS and ReFS is the way they update that metadata. NTFS updates metadata "in place." The metadata says your new folder is named "New Folder," and then you rename it to "Downloaded Files"; when you make the change, the existing metadata itself is written over. If a power outage occurs while the disk is being updated, the metadata can be partially or completely overwritten, causing data corruption (called a "torn write"). You may experience a BSOD when you try to restart, or you may find that your data is no longer accessible.
ReFS does not update metadata in place. Instead, it writes a new copy of the metadata, and only once the new copy is intact and all the writes have completed does ReFS point the file at the new metadata. There are further improvements to the way that ReFS handles metadata writes, but for the most part the other changes are performance improvements. This new allocate-on-write approach to updating metadata allows you to reliably and consistently recover from power outages without disk corruption.
“We perform significant testing where power is withdrawn from the system while the system is under extreme stress, and once the system is back up, all structures are examined for correctness. This testing is the ultimate measure of our success. We have achieved an unprecedented level of robustness in this test for Microsoft file systems. We believe this is industry-leading and fulfills our key design goals.”
- Surendra Verma, “Building the Next Generation File System for Windows 8”
Development Manager, Storage and File Systems
Microsoft

2) ReFS works with Storage Spaces to Better Detect and Repair Problems

Storage Spaces is a storage virtualization technology. Storage Spaces was not made to run exclusively with ReFS, but they do work great together. ReFS has improved functionality when used in conjunction with Storage Spaces, and some of the redundancy features that Storage Spaces offers can be exploited more fully because of the capabilities of ReFS.
So ReFS can be used without Storage Spaces, and Storage Spaces can be used without ReFS, but when they are used together, both work more effectively. Storage Spaces uses mirroring, spreading copies of data across multiple physical drives. When Storage Spaces finds even one piece of corrupt data on a drive, the corrupt data is removed from that drive and replaced with a known good copy of the data from another one of the physical drives.
ReFS uses checksums on the metadata to ensure that the data has not been corrupted. When Storage Spaces finds mismatched data between two or more copies of the same file, it can rely on the built-in metadata checksums that are a feature of ReFS. Once the checksums are validated, the correct data is copied back to the other physical drives, and the corrupted data is removed.
Occasionally, an ReFS drive controlled by Storage Spaces will undergo routine maintenance called “scrubbing.” Scrubbing is a task that runs on each file in a Storage Space. Checksums are verified, and if there are any checksums that are found to be invalid, the corrupted data is replaced with known good data from a physical drive that has a valid checksum. Scrubbing is on by default, but can be customized and configured even on individual files.
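To illustrate how the two work together, here is a minimal PowerShell sketch that creates a mirrored Storage Space and formats it with ReFS. It assumes Windows Server 2012 with unused disks available for pooling; the pool, virtual disk, and volume label names are examples only:
# Pool all disks that are available for pooling
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks
# Create a mirrored space, then initialize, partition, and format it with ReFS
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "MirrorSpace" -ResiliencySettingName Mirror -UseMaximumSize
Get-VirtualDisk -FriendlyName "MirrorSpace" | Get-Disk | Initialize-Disk -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem ReFS -NewFileSystemLabel "ResilientData"
With this layout, Storage Spaces provides the mirrored copies and ReFS provides the metadata checksums, which is exactly the combination that makes automatic detection and repair possible.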


1) ReFS Volumes can Stay Live even if they have Irreparable Corruption

With NTFS, even a small amount of data corruption can cause big problems. With ReFS, you are much less likely to have problems. If a system is not using Storage Spaces with mirroring, or if for some strange reason the same piece of data is corrupt across the whole mirror, only the corrupt parts are removed from the volume, and the volume itself stays active, thanks to a feature called "salvage."
Salvage can remove even a single corrupt file. Once the corrupt data is removed, the volume is brought back to a consistent state. Instead of taking the server offline so that time-consuming disk-checking utilities can find and repair the damaged entries, the volume is repaired online, loses only the corrupt files, and is back in service in under one second.

Conclusion

Just as NTFS did, ReFS brings with it some major improvements that will become a normal part of our industry for the foreseeable future. Specifically, ReFS improves the way that metadata is updated and uses checksums to ensure that corrupt data is easily found and repaired.
ReFS is the most robust file system from Microsoft to date, with reliability built in to make the most of our time and reduce the total cost of ownership on Windows Servers.
Michael Simmons

Overview of the File Server Role in Windows Server 8 Failover Clustering

Introduction

The next version of Windows Server has been officially dubbed and the name comes as no surprise to IT pros who have used the last three versions: It’s Windows Server 2012. My next few articles will delve into some of its new and improved features, beginning this time with an overview of the file server role in failover clustering.
In operating systems prior to Windows Server 2012, highly available file services were provided by a failover cluster Client Access Point (CAP) that clients could use to connect to SMB (Server Message Block) or Network File System (NFS) shares on physical disk resources. In a shared-nothing cluster, the File Server group could be online on only one node at a time. In the event of a failure, or if the File Server group was moved to another cluster node, clients were disconnected and had to reconnect when the group came online on an available node in the cluster.
In Windows Server 2012, the File Server Role has been expanded to include a new scenario where application data (specifically Hyper-V and SQL Server) is supported on highly available SMB shares in Windows Server 2012 Failover Clustering. This is called Scale-Out File Services and uses the following:
  • a new client access method using a new cluster resource type, called a Distributed Network Name (DNN)
  • Cluster Shared Volumes v2 (CSVv2)
  • SMB v3 improvements, which enable continuous availability and transparent failover.
SMB v3 allows SMB connections to be distributed across all nodes in the cluster that have simultaneous access to all shares. This can make it possible to provide access with almost zero downtime.

Installing the General Use File Server Role

File servers in a cluster can be configured for general use (such as users storing files in shares) or to support application storage for Hyper-V and SQL. The General Use File Server in Windows Server 2012 is almost the same as it was in Windows Server 2008 R2. The only significant difference is that shares can be made continuously available with the help of the SMB 3.0 protocol.
The following steps show the installation options for installing the General Use File Server role on a Windows Server 2012 failover cluster:
  1. Click on Configure Role in the Actions pane in Failover Cluster Manager.
  2. Click  Next on the Before You Begin page.
  3. On the Select Role page, select the File Server role. Make sure there are no errors indicating the role is not installed on all nodes in the cluster, and click Next.

Figure 1
  4. On the File Server Type page, select File Server for general use and click Next. Note that when you select this option, you have support for SMB and NFS shares, and you can also use File Server Resource Manager, Distributed File System Replication and other File Services role services.

Figure 2
  5. On the Client Access Point page, enter the information for the Client Access Point (CAP) and click Next.
  6. On the Select Storage page, enter a storage location for the data and click Next.
  7. On the Confirmation page, read the Confirmation information and click Next.
  8. On the Summary page, you can click the View Report button if you want to see details of the configuration. Click Finish.
Now that the role is installed, you can create file shares on the failover cluster.
Perform the following steps to create the file shares:
  1. Click the File Server Role in the Failover Cluster Manager and in the Actions pane, click Add File Share.
  2. The server configuration will be retrieved as a connection is made to the File and Storage Services Management interface.
  3. The Select Profile page presents you with five options. For our purposes, you can choose either SMB Share - Basic or SMB Share - Advanced and click Next

Figure 3
  4. On the Share Location page, choose a Share Location and click Next.
  5. On the Share Name page, provide a Share Name and click Next.
  6. On the Other Settings page, there are a number of additional share settings from which you can choose. Notice that Enable Continuous Availability is checked by default; this is to take advantage of the new SMB v3 functionality (Transparent Failover). Another new feature in SMB v3 enables you to encrypt the SMB connection without requiring the overhead of IPsec. You can find out more about SMB v3 here. Click Next.

Figure 4
  7. On the Permissions page, you can configure permissions to control access (both NTFS and share permissions). Click Next.

Figure 5
  8. On the Confirmation page, review the information and click Create.
When the share is configured, it will appear in the Shares tab.

Figure 6
If you prefer the command line, you can also get information about the share by using the PowerShell cmdlet Get-SMBShare.
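For example (the share name below is a placeholder for the one you created in the wizard):
Get-SmbShare -Name "ShareName" | Select-Object Name, Path, ContinuouslyAvailable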
Another place you can find share information is in the File and Storage Services Management Interface in Server Manager.

Installing the Scale-Out File Server Role

The Scale-Out File Server role is new in Windows Server 2012. With the many new technologies in Windows Server 2012, you can provide continuously available file services for application data and, at the same time, respond to increased demands quickly by bringing more servers online. Scale-Out File Servers take advantage of new features included in Windows Server 2012 Failover Clustering. The key new features in Windows Server 2012 that enable the Scale-Out File Server role include the following:
  • Distributed Network Name (DNN) – this is the name that client systems use to connect to cluster shared resources
  • Scale-Out File Server resource type
  • Cluster Shared Volumes Version 2 (CSVv2)
  • Scale-Out File Server Role
Note that Failover Clustering is required for Scale-Out File Servers, and Scale-Out File Server clusters are limited to four servers. Also, the File Server role service must be enabled on all nodes in the cluster.
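As a sketch of that prerequisite (assuming Windows Server 2012 with PowerShell remoting enabled; the node names are examples), the File Server role service and the Failover Clustering feature can be enabled on each node like this:
Invoke-Command -ComputerName Node1, Node2, Node3, Node4 -ScriptBlock {
    Install-WindowsFeature FS-FileServer, Failover-Clustering -IncludeManagementTools
}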
SMB v3, which is installed and enabled by default in Windows Server 2012, provides several features that support continuous availability of file shares to end users and applications. It's important to point out that Scale-Out File Servers support storing application data on file shares and that SMB v3 will provide continuous availability for those shares for the two supported applications, which are Hyper-V and SQL Server. Specific capabilities that are part of the new SMB v3 functionality (known as SMB 2.2 during the beta) include:
  • SMB2 Transparent Failover – this allows all members of the cluster to host the shared resources and makes it possible for clients to connect to other members of the cluster transparently, without any perceptible disconnection on the client side.
  • SMB2 Multichannel – this enables the use of multiple network connections to connect to cluster-hosted resources and enables the cluster members to be highly available by supporting out-of-the-box NIC teaming and bandwidth aggregation.
  • SMB2 Direct (RDMA) – this makes it possible to take advantage of the full speed of the NICs without impacting the processors on the cluster members; it also makes it possible to obtain full wire speed and network access speeds comparable to direct attached storage.
For more information about the Scale-Out File Server role, check out this link.
Perform the following steps to create a Scale-Out File Server Role:
  1. Click Configure Role in the Actions pane in Failover Cluster Manager.
  2. On the Before You Begin page, click Next.
  3. On the Select Role page, click the File Server role. Make sure there are no errors indicating the role is not installed on all nodes in the cluster and click Next.

Figure 7
  4. On the File Server Type page, select File Server for scale-out application data and click Next. Note that when you select this role, there is support only for SMB v3 shares; that is, there is no support for NFS shares. In addition, with this configuration you will not be able to use some file server role services, such as FSRM and DFS replication.

Figure 8
  5. On the Client Access Point page, enter a valid NetBIOS name for the Client Access Point and click Next.
  6. On the Confirmation page, review the information and click Next.
  7. When the wizard completes, you can click the View Report button to see details of the configuration. Click Finish.
Now that the role is installed, you’re ready to create file shares for applications where you can place the application data.
Perform the following steps to create shared folders:
  1. Click the File Server Role in the Failover Cluster Manager, and in the Actions pane, click on Add File Share.
  2. The server configuration will be retrieved as a connection is made to the File and Storage Services Management interface.
  3. On the Select Profile page of the New Share Wizard, choose SMB Share - Server Application for the profile and click Next.

Figure 9
  4. On the Share Location page, you should see only Cluster Shared Volumes. Select a volume where you want to place the share and click Next.

Figure 10
  5. On the Share Name page, enter a Share Name and click Next.
  6. On the Other settings page, note that Enable continuous availability is selected by default. Click Next.
  7. On the Permissions page, you can configure permissions to control access (both NTFS and share permissions) as needed. Click Next.
  8. Review the information on the Confirmation screen and click Create.
The Shares tab reflects all the shares that are configured on the CSV volumes.

Figure 11
The Distributed Network Name resource, which is part of the Scale-Out File Server role, has no dependencies on IP addresses; that means you don’t have to configure anything in advance for this to work. The reason for this is that the resource registers the node IP addresses for each node in the cluster in DNS. These IP addresses can be static IP addresses or they can be managed by DHCP. The IP address of each of the nodes in the cluster is recorded in DNS and is mapped to the Distributed Network Name. Clients then receive up to six addresses from the DNS server and DNS round robin is used to distribute the load.
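If you prefer PowerShell, the Scale-Out File Server role and a continuously available share can also be created from the command line. This is a minimal sketch, assuming you are on a cluster node running Windows Server 2012 with the FailoverClusters and SmbShare modules; the role name, CSV path, and account are examples:
# Create the Scale-Out File Server role (this creates the Distributed Network Name resource)
Add-ClusterScaleOutFileServerRole -Name SOFS1
# Create a folder on a Cluster Shared Volume and share it with continuous availability
New-Item -ItemType Directory -Path C:\ClusterStorage\Volume1\VMStore
New-SmbShare -Name VMStore -Path C:\ClusterStorage\Volume1\VMStore -ContinuouslyAvailable $true -FullAccess "DOMAIN\Hyper-V-Hosts"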

Summary

In this article, we took a quick look at some of the new file server role capabilities included in Windows Server 2012. The traditional file server role continues in Windows Server 2012, but includes some nice new benefits, thanks to the new SMB v3 protocol, which provides continuous availability and near-zero downtime for file resources hosted by the cluster. A new file services role, the Scale-Out File Server role, enables you to store application data for Hyper-V and SQL Server, and is optimized for applications that require continuous connectivity to their files over the network. Several improvements in the SMB v3 protocol make it possible to host these files on a file server cluster at wire speed and with performance very close to what you can get with direct attached storage.

Author: Deb Shinder