
Friday, June 15, 2012

Synchronizing Data Between Two Database Servers with SQL Server 2008


When you run a website with a large database and heavy traffic, data safety and high availability become essential. Such websites typically run on multiple databases hosted on separate servers, both to keep the data safe and to reduce the load on any one database server when traffic is very high.
In this article, I will show how to deploy the database on two database servers that synchronize with each other, keeping the website's data consistent.
Prerequisites:
  • Two servers to host the databases.
  • SQL Server 2008 installed on both servers. I recommend SQL Server 2008 Enterprise running on Windows Server 2008.
  • SQL Server Management Studio installed for administration.
Steps:
Once SQL Server is installed, make sure the services shown below have started successfully:
  • SQL Server
  • SQL Server Agent
  • SQL Server Browser

In Protocols for MSSQLSERVER, make sure the TCP/IP protocol is enabled.

Use SQL Server Management Studio to log in to server 1 and server 2. In this lab I will use two servers named kenhgiaiphap01 and kenhgiaiphap02.

After logging in, create a database named test1 on server kenhgiaiphap01 and a database named test2 on server kenhgiaiphap02. These will be the website's databases, and they will hold identical data once synchronization is in place.
Import your data into test1 first (test2 can stay empty).

Next, on server kenhgiaiphap01, expand Replication, right-click Publication, and choose New Publication.

When the Welcome window appears, click Next.

Select the database to synchronize with server 2; here we choose test1.
Choose Merge Publication.
Note: If you choose Transactional publication, data is synchronized in one direction only: updates on server 1 reach server 2, but not the other way around. Merge Publication synchronizes data in both directions.
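To make the difference concrete, here is a small Python sketch of the propagation direction only. It is an illustration, not actual SQL Server code, and real merge replication also resolves conflicting updates, which this sketch ignores by assuming disjoint changes:

```python
# Illustration only: models which direction changes flow in.
# Transactional publication pushes publisher changes to the subscriber;
# merge publication reconciles changes made on either side.

def transactional_sync(publisher, subscriber):
    """One-way: the subscriber receives everything from the publisher."""
    subscriber.update(publisher)

def merge_sync(db1, db2):
    """Two-way: both databases end up with the union of all changes.
    Assumes disjoint changes; real merge replication resolves conflicts."""
    merged = {**db1, **db2}
    db1.update(merged)
    db2.update(merged)

test1 = {"row1": "inserted on server 1"}
test2 = {"row2": "inserted on server 2"}
merge_sync(test1, test2)
print(test1 == test2)  # True: both servers now hold row1 and row2
```

With transactional_sync, a row inserted on the subscriber side would never reach the publisher, which is exactly why we pick Merge Publication here.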

Since the servers may run different versions of SQL Server, you are asked to choose the subscriber version here. I choose SQL Server 2008.

Select the database objects you want synchronized.
Note: Every table you want to synchronize must have a primary key.

Click Next.

Click Next again.

Allow the snapshot to be created immediately, then click Next.

Choose Security Settings.

Enter the SQL Server login credentials for server 1. I recommend using a Windows account here.

Click OK, then Next.

Enter a display name; here I use Test Replication.

Click Next; if nothing went wrong, the result looks like the screenshot below.

Once the Publication has been created, right-click it and choose New Subscriptions.

On the Welcome screen, click Next.

Select the publication that server 2 will pull from for synchronization.

Click Next.

Click Add SQL Server Subscriber, add the second server (kenhgiaiphap02), and choose the test2 database on that server.

Click Next and enter the login credentials for database server 2 (kenhgiaiphap02).

Click Next. Under Agent Schedule, choose Run Continuously.

Click Next.

Click Next.

Review the settings and click Finish.

If setup succeeds, the result looks like the screenshot below.

Wait a moment for the data to synchronize from test1 to test2.

Verify the result:
Insert a new record into database test1.

Open database test2 and you will see that the data has been updated to match test1. Conversely, any change made in test2 is also propagated back to test1.
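If you prefer to script this check, comparing the two tables' row sets is enough. A small Python sketch follows; the sample rows are made up, and in practice each row set would come from a query against one of the servers:

```python
def diff_tables(rows_a, rows_b, key=0):
    """Report rows that exist on only one side, matched on the
    primary key column (index `key` of each row tuple)."""
    a = {row[key]: row for row in rows_a}
    b = {row[key]: row for row in rows_b}
    only_a = sorted(a[k] for k in a.keys() - b.keys())
    only_b = sorted(b[k] for k in b.keys() - a.keys())
    return only_a, only_b

# Example: row 2 has not reached test2 yet.
rows_test1 = [(1, "alice"), (2, "bob")]
rows_test2 = [(1, "alice")]
print(diff_tables(rows_test1, rows_test2))  # ([(2, 'bob')], [])
```

When both returned lists are empty, the two tables have converged.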

Good luck!

Friday, June 8, 2012

The Domino 8.5.3 Administration Tool

Domino Administration Tool

I. Connecting Lotus Notes / Domino Administrator to the Domino server
After installing Notes / Domino Administrator, you will have two shortcuts as shown below.
Launch Domino Administrator; when the Configuration window appears, click Next.

On the next screen, enter the name and address of the Domino server and, if you are the Domino administrator, check the option "I want to connect to a Domino server", then click Next.
On the next screen, click Next.
Click Next again.
Enter the password for Admin.id and click Log In.
On the last screen, click Finish to complete the setup.

II. An overview of the Domino administration tool (Domino Admin)

1. The user interface

Item: Function

Menu: The list of commands available.
Window tabs: If you work only with Domino Administrator, you have a single window tab.
Function tabs: Administrative tasks are split across tabs: People & Groups, Files, Server, Messaging, Replication, and Configuration. The Server and Messaging tabs have sub-tabs.
Directory selector: Chooses the directory to read from or the file to view; it appears on several of the function tabs. (This part is a little hard to follow; I will explain it in detail later.)
Context pane: Shows the objects available under the current function tab; for example, you select the Domino Directory when working in the People & Groups tab. The corresponding controls appear in the Results pane.
Results pane: Shows the results of selecting objects in the Context pane. Note: press F9 to refresh the view, or Shift+Ctrl+F9 to rebuild the entire view if necessary.
Tools pane: A list of tools for performing specific administrative tasks. If a tool is grayed out, the function is unavailable, or is only available for certain selections in the Results pane. To use a tool, select an item in the Results pane and then choose the function in the Tools pane.




2. The workflow for using Domino Administrator

1. Select the server you want to administer.
2. Select the function tab for the task you want to perform.
3. Select the object you want to work with in the Context pane.
4. Select an item in the Results pane.
5. Click a tool and perform the action (you can also right-click the item instead of using the Tools pane).

Next article: Creating and configuring Configuration Settings documents.

Thursday, June 7, 2012

Steps to upgrade a 32-bit Domino server to 64-bit server on Windows platform


Problem

What steps should be taken to upgrade a 32-bit Domino server to a 64-bit server on the Windows platform?
We can consider two scenarios in upgrading the Domino server from 32-bit to 64-bit:
I. You are running 32-bit Domino on 64-bit Windows. How do you upgrade it to 64-bit Domino?
II. You are running 32-bit Domino on 32-bit Windows. How do you upgrade it to 64-bit Domino on 64-bit Windows on different hardware?

Resolving the problem

You can follow the steps below for either scenario to upgrade the Domino server to 64-bit.
NOTES:
Before you upgrade the Domino server, it is always recommended that you back up your ID files (server.id, admin.id, and cert.id) and the server's notes.ini file, along with your entire data directory.

Before starting the Domino upgrade, it is best practice to make sure that all third-party and Lotus companion products that require a parallel 64-bit Domino upgrade are available and upgraded at the same time.


Scenario 1: How to upgrade 32-bit Domino to 64-bit Domino running on 64-bit Windows



1. Write down your current Domino Program Directory and current Domino Data Directory. You will need this information when you run the 64-bit Installer.
2. Shut down the Domino server.
3. Run the 64-bit Domino installer. It first uninstalls the existing 32-bit Domino and displays the window below for some time while the uninstall completes.


Note: Only the program directory files are uninstalled; the data directory files, and the notes.ini file in the program directory, are left as they are.

Click Next to continue.
4. The installer then prompts for the program directory, with a default path of C:\Program Files\IBM\Lotus\Domino, as shown below.


Here, change the PROGRAM and DATA directory paths to the paths used by your earlier 32-bit server.
5. Click Next to continue and follow the on-screen instructions to finish the 64-bit Domino installation.
6. On the final window, shown below, choose "Finish" to complete the installation process.


7. Before restarting the Domino server, run offline maintenance on the following system databases from a command prompt.

Fixup:
x:\Lotus\Domino\nfixup names.nsf -F
x:\Lotus\Domino\nfixup admin4.nsf -F

If you are using "transaction logging", make sure you use the switch -J, as below:
x:\Lotus\Domino\nfixup names.nsf -J
x:\Lotus\Domino\nfixup admin4.nsf -J

Compact:
x:\Lotus\Domino\ncompact names.nsf -c
x:\Lotus\Domino\ncompact admin4.nsf -c

Updall:
Updall should be run on all databases. When the code changes from 32-bit to 64-bit, every existing view and full-text index is rebuilt the first time a database is accessed after the upgraded server comes up. This can take a very long time, so it is advisable to have updall do the work while the server is already scheduled to be down for the upgrade. By running updall while the server is still down, the views are rebuilt ahead of time, so the rebuild does not happen on first access; it is already done.

You can use indirect files (.IND) to run multiple updall processes concurrently to complete in a more timely manner. See the following wiki article for more information: Using indirect files to run maintenance tasks
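An indirect file is just a plain-text list of database paths, one per line. As a sketch of the idea, the following Python snippet splits a set of databases into several .IND files so that one updall process can be started per file; the file names, shard count, and database list are all made-up examples:

```python
# Split a list of Domino databases into N indirect (.IND) files so that
# N updall processes can be run concurrently, one per indirect file.
# The database paths and shard count below are illustrative only.

def write_indirect_files(databases, shards, prefix="updall"):
    """Round-robin the database paths into <prefix>1.ind .. <prefix>N.ind."""
    buckets = [databases[i::shards] for i in range(shards)]
    names = []
    for n, bucket in enumerate(buckets, start=1):
        name = f"{prefix}{n}.ind"
        with open(name, "w") as f:
            f.write("\n".join(bucket) + "\n")
        names.append(name)
    return names

files = write_indirect_files(
    ["names.nsf", "admin4.nsf", "log.nsf", "mail\\user1.nsf"], shards=2)
print(files)  # ['updall1.ind', 'updall2.ind']
```

Each resulting file can then be passed to its own updall invocation while the server is down.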

8. Start the 64-bit Domino server by double-clicking its desktop icon. You can run the server either as a service or as an application.
9. If the 32-bit Domino server was upgraded from version 8.0.x to 8.5.x and the server is an Administration server, then at server startup you are prompted with the message:

    "Do you want to upgrade the design of your address book?  This replaces the standard forms and views with the ones from the template.(Yes/No)."

10. Type Yes or Y and press Enter; the design of names.nsf is upgraded with the latest 8.5.x template.

This completes the upgrade of your Domino server to 64-bit.

Scenario 2: How to upgrade 32-bit Domino running on 32-bit Windows to new hardware running 64-bit Domino on 64-bit Windows



1. On the new hardware running 64-bit Windows, create the same program directory and data directory paths as on your earlier 32-bit Domino server.

For example: if the program and data directories were C:\Lotus\Domino and D:\Lotus\Domino\Data, create the same folder structure on the new hardware.

2. Copy the notes.ini file from the 32-bit Domino program directory to the program directory you created on the new hardware in step 1.
3. Copy the entire data directory from the 32-bit Domino server to the data directory you created in step 1.
4. Once the notes.ini file and data directory have been copied, run the Lotus Domino 8.5.x 64-bit installer on the new hardware, follow the on-screen instructions, and select the program and data directory paths created above.
5. Click Next to continue and follow the on-screen instructions to finish the 64-bit Domino installation.



6. Before restarting the Domino server, run offline maintenance on the following system databases from a command prompt.

Fixup:
x:\Lotus\Domino\nfixup names.nsf -F
x:\Lotus\Domino\nfixup admin4.nsf -F

If you are using "transaction logging", make sure you use the switch -J, as below:
x:\Lotus\Domino\nfixup names.nsf -J
x:\Lotus\Domino\nfixup admin4.nsf -J

Compact:
x:\Lotus\Domino\ncompact names.nsf -c
x:\Lotus\Domino\ncompact admin4.nsf -c


Updall:
Updall should be run on all databases. When the code changes from 32-bit to 64-bit, every existing view and full-text index is rebuilt the first time a database is accessed after the upgraded server comes up. This can take a very long time, so it is advisable to have updall do the work while the server is already scheduled to be down for the upgrade. By running updall while the server is still down, the views are rebuilt ahead of time, so the rebuild does not happen on first access; it is already done.

You can use indirect files (.IND) to run multiple updall processes concurrently to complete in a more timely manner. See the following wiki article for more information: Using indirect files to run maintenance tasks

7. Start the 64-bit Domino server by double-clicking its desktop icon. You can run the server either as a service or as an application.
8. If the 32-bit Domino server was upgraded from version 8.0.x to 8.5.x and the server is an Administration server, then at server startup you are prompted with the message:

    "Do you want to upgrade the design of your address book?  This replaces the standard forms and views with the ones from the template.(Yes/No)."

9. Type Yes or Y and press Enter; the design of names.nsf is upgraded with the latest 8.5.x template.

This completes the migration of your Domino server to 64-bit.
IBM 

Wednesday, May 23, 2012

Synchronize IIS

This quick guide walks you through the process of using the Web Deployment Tool to synchronize a Web site on an IIS source computer to an IIS destination computer. You can do this by "pushing" data to a remote destination, or by "pulling" data from a remote source. This guide shows both methods, as well as an option to use a package file so that you do not have to install the Web Deployment Agent Service (MsDepSvc, or "remote agent service").
What are the ways you can synchronize using the Web Deployment Tool?
  • Push (synchronize from a local source to a remote destination)
  • Pull (synchronize from a remote source to a local destination)
  • Independent Sync (initiate a synchronization from a computer where both destination and source are remote)
  • Manual Local Sync (create a package file of the source and copy it to the destination, then run it locally)
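All four methods boil down to the same msdeploy -verb:sync call with different -source and -dest arguments. As a rough illustration, this Python helper assembles the command line for each method; the site and server names are placeholders, and the mode names are my own labels, not msdeploy terms:

```python
def msdeploy_sync(site, mode, remote=None, package=None, whatif=False):
    """Build an msdeploy -verb:sync command line for the given sync method."""
    src = dst = f'apphostconfig="{site}"'
    if mode == "push":           # run on the source; remote is the destination
        dst += f",computername={remote}"
    elif mode == "pull":         # run on the destination; remote is the source
        src += f",computername={remote}"
    elif mode == "package-out":  # create a package file on the source
        dst = f"package={package}"
    elif mode == "package-in":   # apply a package file on the destination
        src = f"package={package}"
    cmd = f"msdeploy -verb:sync -source:{src} -dest:{dst}"
    return cmd + (" -whatif" if whatif else "")

print(msdeploy_sync("Default Web Site", "push", remote="Server1", whatif=True))
```

The output of the "push" example matches the command shown later in Part 3.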

Prerequisites

This guide requires the following prerequisites:
  • .NET Framework 2.0 SP1 or greater
  • Web Deployment Tool 1.1
Note: If you have not already installed the Web Deployment Tool, see Installing and Configuring Web Deploy.

Part 1 - View your site's dependencies

1. Get the dependencies of the Web site by running the following command:
msdeploy -verb:getDependencies -source:apphostconfig="Default Web Site"
2. Review the output of the dependencies and look for any script maps or installed components that are in use by the site. For example, if Windows Authentication is in use by the Web site, you will see <dependency name="WindowsAuthentication" />.
3. If your site is inheriting any script maps, these will not be listed in the dependencies and you should also review the script maps for your site manually.
4. Compile a list of the components needed on the destination.
For detailed steps on analyzing the output of getDependencies, see Viewing Web Site Dependencies.
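Compiling the list in step 4 can be automated, since getDependencies emits XML. A small Python sketch follows; the sample XML is a made-up fragment modeled on the <dependency name=... /> elements mentioned above, not a literal msdeploy output:

```python
import xml.etree.ElementTree as ET

# Hypothetical sample of msdeploy getDependencies output (illustrative only).
sample = """<output>
  <dependencies>
    <dependency name="WindowsAuthentication" />
    <dependency name="AnonymousAuthentication" />
    <dependency name="ManagedRuntimeVersion" value="v2.0" />
  </dependencies>
</output>"""

def dependency_names(xml_text):
    """Collect the name attribute of every <dependency> element."""
    root = ET.fromstring(xml_text)
    return sorted(dep.get("name") for dep in root.iter("dependency"))

print(dependency_names(sample))
```

Remember that inherited script maps do not appear in this output, so they still need a manual review.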

Part 2 - Configure the target (destination)

1. Review the list of dependencies and install them on the destination server.
For example, let’s assume you had the following in use for your Web site:
• ASP.NET
• Windows Authentication
• Anonymous Authentication
Based on analyzing your dependencies, you would install those components on the destination server before performing the synchronization.

Part 3 – Synchronize your site to the target

1. Always make a backup of the destination and source servers. Even if you are just testing, it allows you to easily restore the state of your server. Run the following command to backup an IIS 7 or above server:
%windir%\system32\inetsrv\appcmd add backup "PreMsDeploy"
2. Install the remote agent service on the source or the destination, depending on whether you want to "pull" the data from a remote source or "push" the data to a remote destination.
3. Start the service on the computer.
net start msdepsvc 
4. Run the following command to validate what would happen if the synchronization were run. The -whatif flag will not show every change; it will just show an optimistic view of what might change if everything succeeds (for example, it won't catch errors where you can't write to the destination.)
Pushing to remote destination, running on source computer (the computerName argument identifies the remote destination computer).
msdeploy -verb:sync -source:apphostconfig="Default Web Site" -dest:apphostconfig="Default Web Site",computername=Server1 -whatif > msdeploysync.log
Pulling from a remote source, running on destination machine (the computerName argument identifies the remote source computer).
msdeploy -verb:sync -source:apphostconfig="Default Web Site",computername=Server1 -dest:apphostconfig="Default Web Site" -whatif > msdeploysync.log
5. After verifying the output, run the same command again without the -whatif flag:
Pushing to remote destination, running on source machine
msdeploy -verb:sync -source:apphostconfig="Default Web Site" -dest:apphostconfig="Default Web Site",computername=Server1 > msdeploysync.log
Pulling from a remote source, running on destination machine
msdeploy -verb:sync -source:apphostconfig="Default Web Site",computername=Server1 -dest:apphostconfig="Default Web Site" > msdeploysync.log

Optional - Synchronize your site to the target by using a package file

If you don't wish to use the remote service, you can use a package (compressed file) instead.
1. Run the following command on the source server to create a package of the Web site for synchronization:
msdeploy -verb:sync  -source:apphostconfig="Default Web Site" -dest:package=c:\site1.zip
2. Copy the package file to the destination server.
3. Run the following command on the destination server to validate what would happen if the synchronization were run:
msdeploy -verb:sync -source:package=c:\site1.zip -dest:apphostconfig="Default Web Site" -whatif > msdeploysync.log
4. After verifying the output, run the same command again without the -whatif flag:
msdeploy -verb:sync -source:package=c:\site1.zip -dest:apphostconfig="Default Web Site" > msdeploysync.log

You are now done synchronizing your site. To verify, test browsing to the Web site on the destination server. For troubleshooting help, see Troubleshooting Web Deploy.

Summary

You have now synchronized a web site from a source IIS server to a destination IIS server, including viewing the dependencies, configuring the destination IIS server and performing the synchronization.
faith_a

Monday, May 21, 2012

Installing Lotus Notes 8.5.3 (Notes Client, Domino Administrator)

(Setup Lotus Notes Client and Domino Administrator Client 8.5.3)
1. Hardware requirements
Disk space: 1 GB or more
RAM: 512 MB on Windows XP (1 GB recommended); 1 GB on Windows 7 (1.5 GB or more recommended)
Processor: Intel Pentium 4, 1.2 GHz or higher, or equivalent
2. Operating system requirements
Windows XP or later
3. Browser requirements
IE 6 with all updates
Firefox 3.5
4. Installing the Notes Client and Administrator Client
Browse to the folder containing the Lotus Notes 8.5.3 installer and run setup.exe.
When the Lotus Notes 8.5.3 window appears, click Next.

Read the license agreement, select I accept the terms in the license agreement, then click Next.


Next, choose the installation path for the Notes Client and click Next.


Select the components to install; here I choose Notes Client and Domino Administrator, then click Next.


Click Install to begin the installation.


Click Finish to complete the installation.


4 Reasons ReFS (Resilient File System) is Better Than NTFS

Overview

Resilient File System (ReFS) is a new file system introduced in Windows Server 2012. Initially, it is being targeted for implementation as a file system that is primarily used for file servers. However, starting as the file system for a file server is just the beginning. Like its predecessor, NTFS, ReFS will begin as a file server system, then become a mainstream file system. Before long, we will all be using ReFS on our boot partitions.
So why would you want to change file systems? If NTFS is working, why should anybody even consider switching to ReFS? ReFS is better and faster in many ways than NTFS, but in one way more than all others: its resiliency.
Resilient File System will likely replace NTFS completely within the next versions of Windows, and here are some reasons why you are going to really love the new file system.

4) ReFS Supports Long File Names and File Paths. Really Long.

Capacity is just one of the areas where ReFS makes changes. There is no longer a 255-character limit on long file names: a file name in ReFS can be up to 32,768 Unicode characters long. The limit on the total path length has likewise been raised from 255 characters to 32K (32,768).
The legacy 8.3 naming convention is no longer stored as part of the file data. There is only one file name, and it can be a very long name.
Other changes have increased the capacity as well, though it is unlikely that the maximum size of a single volume will impact a real person. NTFS already had a maximum volume size of 16 Exabytes. The ReFS format allows a maximum volume size of 262,144 Exabytes.

3) ReFS is Much Better at Handling Power Outages

NTFS stores all of its file information in metadata: the file name is stored in the metadata, the location on the hard disk is stored in the metadata, and when you rename a file, you are changing the metadata. Likewise, ReFS stores its file information in metadata.
One big difference between NTFS and ReFS is the way they update that metadata. NTFS performs in-place metadata updates, meaning the metadata is overwritten where it sits. Say the metadata records that your new folder is named "New Folder," and then you rename it to "Downloaded Files." When you make the change, the actual metadata itself is written over. If a power outage occurs while the disk is being updated, the metadata can be partially or completely overwritten, causing data corruption (called a "torn write"). You may experience a BSOD when you try to restart, or you may find that your data is no longer accessible.
ReFS does not update metadata in place. Instead, it creates a new copy of the metadata, and only once the new copy is intact and all the writes have completed does the file switch over to the new metadata. There are further improvements to the way ReFS handles metadata writes, but for the most part those are performance improvements. This new way of updating metadata allows the system to recover reliably and consistently from power outages without disk corruption.
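The difference can be sketched in a few lines of Python. Here the "disk" is just a dictionary and the power cut is simulated by truncating the write partway through; this illustrates the idea, not the actual on-disk formats:

```python
def update_in_place(disk, key, new_value, cut_power_after=None):
    """NTFS-style: overwrite the metadata where it sits.
    If power is cut mid-write, the record is left torn."""
    written = new_value[:cut_power_after] if cut_power_after is not None else new_value
    disk[key] = written  # a partial overwrite destroys the old value

def update_copy_on_write(disk, key, new_value, cut_power_after=None):
    """ReFS-style: write a complete new copy, then switch pointers.
    If power is cut mid-write, the old record stays intact."""
    written = new_value[:cut_power_after] if cut_power_after is not None else new_value
    if written == new_value:  # only commit a fully written copy
        disk[key] = new_value

disk1 = {"folder": "New Folder"}
update_in_place(disk1, "folder", "Downloaded Files", cut_power_after=4)
print(disk1["folder"])  # torn write: "Down"

disk2 = {"folder": "New Folder"}
update_copy_on_write(disk2, "folder", "Downloaded Files", cut_power_after=4)
print(disk2["folder"])  # old metadata survives: "New Folder"
```

In the in-place case the original name is gone either way; in the copy-on-write case an interrupted update simply never commits.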
“We perform significant testing where power is withdrawn from the system while the system is under extreme stress, and once the system is back up, all structures are examined for correctness. This testing is the ultimate measure of our success. We have achieved an unprecedented level of robustness in this test for Microsoft file systems. We believe this is industry-leading and fulfills our key design goals.”
- Surendra Verma, “Building the Next Generation File System for Windows 8”
Development Manager, Storage and File Systems
Microsoft

2) ReFS works with Storage Spaces to Better Detect and Repair Problems

Storage Spaces is a storage virtualization technology. Storage Spaces was not made to run exclusively with ReFS, but they do work great together. ReFS has improved functionality when used in conjunction with Storage Spaces. Likewise, some of the redundancy features that Storage Spaces offers are able to be leveraged because of the abilities of ReFS.
So ReFS can be used without Storage Spaces, and Storage Spaces can be used without ReFS, but when they are used together, both work more effectively. Storage Spaces uses mirroring, spreading copies of data across multiple physical drives. When Storage Spaces finds a problem with even one piece of corrupt data on a drive, the corrupt data is removed from that drive and replaced with a known good copy of the data from another of the physical drives.
ReFS uses checksums on the metadata to ensure that the data has not been corrupted. When Storage Spaces finds mismatched data between two or more copies of the same file, it can rely on the built-in metadata checksums that are a feature of ReFS. Once the checksums are validated, the correct data is copied back to the other physical drives, and the corrupted data is removed.
Occasionally, an ReFS drive controlled by Storage Spaces will undergo routine maintenance called “scrubbing.” Scrubbing is a task that runs on each file in a Storage Space. Checksums are verified, and if there are any checksums that are found to be invalid, the corrupted data is replaced with known good data from a physical drive that has a valid checksum. Scrubbing is on by default, but can be customized and configured even on individual files.
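A toy model of that scrub pass, assuming at least one intact mirror copy; real ReFS validates per-metadata checksums rather than hashing whole files, so treat this purely as an illustration:

```python
import hashlib

def checksum(data):
    """Hash stand-in for the ReFS metadata checksum."""
    return hashlib.sha256(data).hexdigest()

def scrub(copies, expected_checksum):
    """Replace any mirror copy whose checksum does not validate
    with a known good copy from another drive."""
    good = next(c for c in copies if checksum(c) == expected_checksum)
    return [c if checksum(c) == expected_checksum else good for c in copies]

original = b"payroll records"
mirrors = [original, b"payro\x00l records", original]  # drive 2 is corrupt
repaired = scrub(mirrors, checksum(original))
print(all(c == original for c in repaired))  # True
```

The key property is the same as in the article: validation is cheap, and repair only needs one surviving good copy.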


1) ReFS Volumes can Stay Live even if they have Irreparable Corruption

With NTFS, even a small amount of data corruption can cause big problems. With ReFS you are much less likely to have problems. In a case where a system is not using Storage Spaces and mirroring, or if for some strange reason one part of the data across the whole mirror is corrupt, only the corrupt parts will be removed from the volume, and the volume itself will stay active, thanks to “salvage.”
Salvage can remove even a single corrupt file. Once the corrupt data is removed, the volume is brought back. This turns what would usually be a server taken offline while time-consuming disk-checking utilities find and repair the entries into a volume that is repaired, minus the corrupt data files, and brought back online in under one second.

Conclusion

Just as NTFS did, ReFS brings with it some major improvements that will become a normal part of our industry for the foreseeable future. Specifically, ReFS improves the way metadata is updated and uses checksums to ensure that corrupt data is easily found and repaired.
ReFS is the most robust file system from Microsoft to date, with reliability built in to make the most of our time and reduce the total cost of ownership on Windows Servers.
Michael Simmons

Overview of the File Server Role in Windows Server 8 Failover Clustering

Introduction

The next version of Windows Server has been officially dubbed and the name comes as no surprise to IT pros who have used the last three versions: It’s Windows Server 2012. My next few articles will delve into some of its new and improved features, beginning this time with an overview of the file server role in failover clustering.
In operating systems prior to Windows Server 2012, highly available file services were provided by a failover cluster Client Access Point (CAP) that clients could use to connect to SMB (Server Message Block) or Network File System (NFS) shares on physical disk resources. If you deployed a shared-nothing cluster, only one node in a cluster File Server group could be online. In the event of a failure, or if the File Server group was moved to another cluster node, clients were disconnected and had to reconnect when the group became available on an online node in the cluster.
In Windows Server 2012, the File Server Role has been expanded to include a new scenario where application data (specifically Hyper-V and SQL Server) is supported on highly available SMB shares in Windows Server 2012 Failover Clustering. This is called Scale-Out File Services and uses the following:
  • a new client access method using a new cluster resource type, called a Distributed Network Name (DNN)
  • Cluster Shared Volumes v2 (CSVv2)
  • SMB v3 improvements, which enable continuous availability and transparent failover.
SMB v3 allows SMB connections to be distributed across all nodes in the cluster that have simultaneous access to all shares. This can make it possible to provide access with almost zero downtime.

Installing the General Use File Server Role

File servers in a cluster can be configured for general use (such as users storing files in shares) or to support application storage for Hyper-V and SQL. The General Use File Server in Windows Server 2012 is almost the same as it was in Windows Server 2008 R2. The only significant difference is that shares can be made continuously available with the help of the SMB 3.0 protocol.
The following steps show the installation options for installing the General Use File Server role on a Windows Server 2012 failover cluster:
  1. Click on Configure Role in the Actions pane in Failover Cluster Manager.
  2. Click  Next on the Before You Begin page.
  3. On the Select Role page, select the File Server role. Make sure there are no errors indicating the role is not installed on all nodes in the cluster, and click Next.

Figure 1
  4. On the File Server Type page, select File Server for general use and click Next. Note that when you select this option, you have support for SMB and NFS shares, and you can also use File Server Resource Manager, Distributed File System Replication and other File Services role services.

Figure 2
  5. On the Client Access Point page, enter the information for the Client Access Point (CAP) and click Next.
  6. On the Select Storage page, enter a storage location for the data and click Next.
  7. On the Confirmation page, read the Confirmation information and click Next.
  8. On the Summary page, you can click the View Report button if you want to see details of the configuration. Click Finish.
Now that the role is installed, you can create file shares on the failover cluster.
Perform the following steps to create the file shares:
  1. Click the File Server Role in the Failover Cluster Manager and in the Actions pane, click Add File Share.
  2. The server configuration will be retrieved as a connection is made to the File and Storage Services Management interface.
  3. The Select Profile page presents you with five options. For our purposes, you can choose either SMB Share - Basic or SMB Share - Advanced and click Next.

Figure 3
  4. On the Share Location page, choose a Share Location and click Next.
  5. On the Share Name page, provide a Share Name and click Next.
  6. On the Other Settings page, there are a number of additional share settings from which you can choose. Notice that Enable Continuous Availability is checked by default; this is to take advantage of the new SMB v3 functionality (Transparent Failover). Another new feature in SMB v3 enables you to encrypt the SMB connection without requiring the overhead of IPsec. You can find out more about SMB v3 here. Click Next.

Figure 4
  7. On the Permissions page, you can configure permissions to control access (both NTFS and share permissions). Click Next.

Figure 5
  8. On the Confirmation page, review the information and click Create.
When the share is configured, it will appear in the Shares tab.

Figure 6
If you prefer the command line, you can also get information about the share by using the PowerShell cmdlet Get-SMBShare.
Another place you can find share information is in the File and Storage Services Management Interface in Server Manager.

Installing the Scale-Out File Server Role

The Scale-Out File Server role is new in Windows Server 2012. With the many new technologies in Windows Server 2012, you can provide continuously available file services for application data and, at the same time, respond to increased demands quickly by bringing more servers online. Scale-Out File Servers take advantage of new features included in Windows Server 2012 Failover Clustering. The key new features that are included in Windows Server 2012, which enable the Scale Out Server Role, include the following:
  • Distributed Network Name (DNN) – this is the name that client systems use to connect to cluster shared resources
  • Scale-Out File Server resource type
  • Cluster Shared Volumes Version 2 (CSVv2)
  • Scale-Out File Server Role
Note that Failover Clustering is required for Scale-Out File Servers and the clusters of Scale Out File Servers are limited to four servers. Also, the File Server role service must be enabled on all nodes in the cluster. 
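If you prefer to script these prerequisites, the following sketch (assuming Windows Server 2012 and an elevated PowerShell session on each node) installs the required components:

```powershell
# Install the File Server role service and Failover Clustering on each cluster node
Install-WindowsFeature -Name FS-FileServer, Failover-Clustering -IncludeManagementTools

# Confirm both components show as installed
Get-WindowsFeature -Name FS-FileServer, Failover-Clustering
```

Repeat this on every node (or use the -ComputerName parameter of Install-WindowsFeature to target remote nodes) before creating the cluster.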
SMB v3, which is installed and enabled by default in Windows Server 2012, provides several features that support continuous availability of file shares to end users and applications. It's important to point out that Scale-Out File Servers support storing application data on file shares and that SMB v3 provides continuous availability for those shares for the two supported applications, which are Hyper-V and SQL Server. (The protocol was called SMB 2.2 during the early previews and was renamed SMB 3.0 for release.) Specific capabilities that exist as part of the new SMB v3 functionality include:
  • SMB Transparent Failover – this allows all members of the cluster to host the shared resources and makes it possible for clients to connect to other members of the cluster transparently, without any perceptible disconnection on the client side.
  • SMB Multichannel – this enables the use of multiple network connections to cluster-hosted resources and makes the cluster members highly available by supporting out-of-the-box NIC teaming and bandwidth aggregation.
  • SMB Direct (RDMA) – this makes it possible to take advantage of the full speed of the NICs without impacting the processors on the cluster members; it also makes it possible to obtain full wire speed and network access speeds comparable to direct attached storage.
For more information about the Scale-Out File Server role, check out this link.
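From a client, you can verify that these capabilities are in effect; the commands below are a sketch that assumes an active SMB connection to the cluster:

```powershell
# Verify the negotiated SMB dialect (3.00 against a Windows Server 2012 file server)
Get-SmbConnection | Select-Object ServerName, ShareName, Dialect

# List the network paths SMB Multichannel is using, including any RDMA-capable NICs
Get-SmbMultichannelConnection
```

A Dialect value of 3.00 confirms that Transparent Failover and the other SMB v3 features described above are available on that connection.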
Perform the following steps to create a Scale-Out File Server Role:
  1. Click Configure Role in the Actions pane in Failover Cluster Manager.
  2. On the Before You Begin page, click Next.
  3. On the Select Role page, click the File Server role. Make sure there are no errors indicating the role is not installed on all nodes in the cluster and click Next.

Figure 7
  4. On the File Server Type page, select File Server for scale-out application data and click Next. Note that when you select this role, there is support only for SMB v3 shares; that is, there is no support for NFS shares. In addition, with this configuration you will not be able to use some file server role services, such as FSRM and DFS replication.

Figure 8
  5. On the Client Access Point page, enter a valid NetBIOS name for the Client Access Point and click Next.
  6. On the Confirmation page, review the information and click Next.
  7. When the wizard completes, you can click the View Report button to see details of the configuration. Click Finish.
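The same role can be created from PowerShell; a minimal sketch, where the name "SOFS01" is a placeholder for your Client Access Point:

```powershell
# Import the clustering module and create the Scale-Out File Server role;
# -Name becomes the Distributed Network Name that clients connect to
Import-Module FailoverClusters
Add-ClusterScaleOutFileServerRole -Name "SOFS01"
```

Run this on any node of the cluster; the role will be hosted across all nodes.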
Now that the role is installed, you’re ready to create file shares for applications where you can place the application data.
Perform the following steps to create shared folders:
  1. Click the File Server Role in the Failover Cluster Manager, and in the Actions pane, click on Add File Share.
  2. The server configuration will be retrieved as a connection is made to the File and Storage Services Management interface.
  3. On the Select Profile page of the New Share Wizard, choose SMB Share - Server Application for the profile and click Next.

Figure 9
  4. On the Share Location page, you should see only Cluster Shared Volumes. Select a volume where you want to place the share and click Next.

Figure 10
  5. On the Share Name page, enter a Share Name and click Next.
  6. On the Other Settings page, note that Enable Continuous Availability is selected by default. Click Next.
  7. On the Permissions page, you can configure permissions to control access (both NTFS and share permissions) as needed. Click Next.
  8. Review the information on the Confirmation page and click Create.
The Shares tab reflects all the shares that are configured on the CSV volumes.
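The equivalent share can also be created from PowerShell; a sketch, assuming a CSV path under C:\ClusterStorage\Volume1 and using placeholder share and group names:

```powershell
# Create a continuously available share on a Cluster Shared Volume
# (the path, share name, and security group below are placeholders)
New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\Shares\VMStore" `
    -ContinuouslyAvailable $true -FullAccess "CONTOSO\HyperVHosts"
```

Remember that NTFS permissions on the underlying folder must also grant access to the application accounts, just as in the wizard's Permissions page.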

Figure 11
The Distributed Network Name resource, which is part of the Scale-Out File Server role, has no dependencies on IP addresses; that means you don’t have to configure anything in advance for this to work. The reason for this is that the resource registers the node IP addresses for each node in the cluster in DNS. These IP addresses can be static IP addresses or they can be managed by DHCP. The IP address of each of the nodes in the cluster is recorded in DNS and is mapped to the Distributed Network Name. Clients then receive up to six addresses from the DNS server and DNS round robin is used to distribute the load.
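You can observe this behavior by querying DNS for the Distributed Network Name; a sketch, where "SOFS01.contoso.com" is a placeholder for your DNN:

```powershell
# Each node registers its own A record under the DNN, so multiple addresses return;
# DNS round robin rotates their order across queries
Resolve-DnsName -Name "SOFS01.contoso.com" -Type A
```

One A record per cluster node should be returned, which is how client connections are spread across the Scale-Out File Server nodes.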

Summary

In this article, we took a quick look at some of the new file server role capabilities included in Windows Server 2012. The traditional file server role continues in Windows Server 2012, but gains some nice new benefits thanks to the new SMB v3 protocol, which provides continuous availability and near-zero downtime for file resources hosted by a cluster. A new file services role, the Scale-Out File Server role, enables you to store application data for Hyper-V and SQL Server, and is optimized for applications that require continuous connectivity to their files over the network. Several improvements in the SMB v3 protocol make it possible to host these files on a file server cluster at full wire speed, with performance very close to what you can get with direct attached storage.

Author: Deb Shinder