
Monday, May 21, 2012

Installing Lotus Notes 8.5.3 (Notes Client and Domino Administrator Client)
1. Hardware requirements
Disk space: 1 GB or more
RAM: 512 MB for Windows XP (1 GB recommended); 1 GB for Windows 7 (1.5 GB or more recommended)
Processor: Intel Pentium 4, 1.2 GHz or higher, or a compatible or equivalent processor
2. Operating system requirements
Windows XP or later
3. Browser requirements
Internet Explorer 6 with all updates
Firefox 3.5
4. Installing the Notes Client and Administrator Client
Browse to the folder containing the Lotus Notes 8.5.3 installation package and run setup.exe.
When the Lotus Notes 8.5.3 window appears, click Next.

Read the license agreement, select I accept the terms in the license agreement, and then click Next.


Next, choose the installation path for the Notes Client, then click Next.


Select the components to install; here I select Notes Client and Domino Administrator, then click Next.


Click Install to begin the installation.


Click Finish to complete the installation.


4 Reasons ReFS (Resilient File System) is Better Than NTFS

Overview

Resilient File System (ReFS) is a new file system introduced in Windows Server 2012. Initially it is targeted at file servers, but that is just the beginning: like its predecessor NTFS, ReFS is expected to start out as a file server system and then become the mainstream file system. Before long, we will all be using ReFS on our boot partitions.
So why would you want to change file systems? If NTFS is working, why should anybody even consider switching to ReFS? ReFS is better and faster than NTFS in many ways, but one stands out above all others: its resiliency.
Resilient File System will likely replace NTFS completely in future versions of Windows, and here are some reasons why you are going to really love the new file system.

4) ReFS Supports Long File Names and File Path. Really Long.

Capacity is just one of the ways that ReFS is making changes. There is no longer a 255-character limit on long file names: a file name in ReFS can be up to 32,768 Unicode characters long! The limit on the total path length has also been raised from 255 characters to 32K (32,768).
The legacy 8.3 naming convention is no longer stored as part of the file data. There is only one file name, and it can be a very long name.
Other changes have increased the capacity as well, though it is unlikely that the maximum size of a single volume will impact a real person. NTFS already had a maximum volume size of 16 Exabytes. The ReFS format allows a maximum volume size of 262,144 Exabytes.
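If you want to check which file system a given volume is using (for example, after formatting a data volume with ReFS), a quick PowerShell sketch like the following works; the drive letter is only an example:
  # List volumes with their file system and capacity
  Get-Volume | Select-Object DriveLetter, FileSystemLabel, FileSystem, Size, SizeRemaining
  # Show low-level information for a single volume, including the features it supports
  fsutil fsinfo volumeinfo E: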

3) ReFS is Much Better at Handling Power Outages

NTFS stores all of its file information in metadata. The file name is stored in the metadata. The location on the hard disk is stored in the metadata. When you rename a file, you’re changing the metadata. Likewise, ReFS stores its file information in metadata.
One big difference between NTFS and ReFS is the way they update that metadata. NTFS performs in-place metadata updates, which means the existing metadata is overwritten. The metadata says your new folder is named “New Folder,” and then you rename it to “Downloaded Files.” When you make the change, the actual metadata itself is written over. If a power outage occurs while the disk is being updated, the metadata can be partially or completely overwritten, causing data corruption (called a “torn write”). You may experience a BSOD when you try to restart, or you may find that your data is no longer accessible.
ReFS does not update the metadata in-place. Instead, it creates a new copy of the metadata, and only once the new copy is intact and all the writes have completed does it switch over to the new metadata. There are further improvements to the way that ReFS handles writes to the metadata, but for the most part the other changes are performance improvements. This new way of updating metadata allows you to reliably and consistently recover from power outages without disk corruption.
“We perform significant testing where power is withdrawn from the system while the system is under extreme stress, and once the system is back up, all structures are examined for correctness. This testing is the ultimate measure of our success. We have achieved an unprecedented level of robustness in this test for Microsoft file systems. We believe this is industry-leading and fulfills our key design goals.”
- Surendra Verma, “Building the Next Generation File System for Windows 8”
Development Manager, Storage and File Systems
Microsoft

2) ReFS works with Storage Spaces to Better Detect and Repair Problems

Storage Spaces is a storage virtualization technology. Storage Spaces was not made to run exclusively with ReFS, but the two work great together. ReFS has improved functionality when used in conjunction with Storage Spaces, and some of the redundancy features that Storage Spaces offers can be fully leveraged because of the abilities of ReFS.
ReFS can be used without Storage Spaces, and Storage Spaces can be used without ReFS, but when they are used together, both work more effectively. Storage Spaces uses mirroring, spreading copies of data across multiple physical drives. When Storage Spaces finds even one piece of corrupt data on a drive, the corrupt data is removed from that drive and replaced with a known good copy of the data from another one of the physical drives.
ReFS uses checksums on the metadata to ensure that the data has not been corrupted. When Storage Spaces finds mismatched data between two or more copies of the same file, it can rely on the built-in metadata checksums that are a feature of ReFS. Once the checksums are validated, the correct data is copied back to the other physical drives, and the corrupted data is removed.
Occasionally, an ReFS drive controlled by Storage Spaces will undergo routine maintenance called “scrubbing.” Scrubbing is a task that runs on each file in a Storage Space. Checksums are verified, and if there are any checksums that are found to be invalid, the corrupted data is replaced with known good data from a physical drive that has a valid checksum. Scrubbing is on by default, but can be customized and configured even on individual files.
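To get a feel for how the two fit together, here is a minimal PowerShell sketch that builds a mirrored storage space and formats it with ReFS on Windows Server 2012; the pool name, disk name, label, drive letter and file path below are all illustrative:
  # Pool the physical disks that are available for pooling
  $disks = Get-PhysicalDisk -CanPool $true
  New-StoragePool -FriendlyName "DataPool" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks
  # Create a mirrored virtual disk so Storage Spaces keeps redundant copies of the data
  New-VirtualDisk -StoragePoolFriendlyName "DataPool" -FriendlyName "MirrorDisk" -ResiliencySettingName Mirror -UseMaximumSize
  # Bring the new disk online and format it with ReFS
  Get-VirtualDisk -FriendlyName "MirrorDisk" | Get-Disk |
      Initialize-Disk -PartitionStyle GPT -PassThru |
      New-Partition -UseMaximumSize -AssignDriveLetter |
      Format-Volume -FileSystem ReFS -NewFileSystemLabel "ResilientData"
  # Integrity streams (the per-file checksums that scrubbing relies on) can be inspected and toggled per file
  Get-FileIntegrity -FileName "E:\Shares\report.docx"
  Set-FileIntegrity -FileName "E:\Shares\report.docx" -Enable $true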


1) ReFS Volumes can Stay Live even if they have Irreparable Corruption

With NTFS, even a small amount of data corruption can cause big problems. With ReFS you are much less likely to have problems. If a system is not using Storage Spaces and mirroring, or if for some strange reason the same piece of data is corrupt across the whole mirror, only the corrupt parts are removed from the volume, and the volume itself stays active, thanks to “salvage.”
Salvage can remove even a single corrupt file. Once the corrupt data is removed, the volume is brought back. What used to mean taking a server offline while time-consuming disk checking utilities found and repaired the bad entries becomes a volume that is repaired (minus the corrupt files) and brought back online in under one second.

Conclusion

Just like NTFS before it, ReFS brings some major improvements that will become a normal part of our industry for the foreseeable future. Specifically, ReFS improves the way that metadata is updated and uses checksums to ensure that corrupt data is easily found and repaired.
ReFS is the most robust file system from Microsoft to date, with reliability built in to make the most of our time and reduce the total cost of ownership on Windows Servers.
Michael Simmons

Overview of the File Server Role in Windows Server 8 Failover Clustering

Introduction

The next version of Windows Server has been officially named, and the name comes as no surprise to IT pros who have used the last three versions: it’s Windows Server 2012. My next few articles will delve into some of its new and improved features, beginning this time with an overview of the file server role in failover clustering.
In operating systems prior to Windows Server 2012, highly available file services were provided by a failover cluster Client Access Point (CAP) that clients could use to connect to SMB (Server Message Block) or Network File System (NFS) shares on physical disk resources. If you deployed a shared-nothing cluster, only one node in the cluster could host a File Server group at a time. In the event of a failure, or if the File Server group was moved to another cluster node, clients were disconnected and had to reconnect when the group became available on an online node in the cluster.
In Windows Server 2012, the File Server Role has been expanded to include a new scenario where application data (specifically Hyper-V and SQL Server) is supported on highly available SMB shares in Windows Server 2012 Failover Clustering. This is called Scale-Out File Services and uses the following:
  • a new client access method using a new cluster resource type, called a Distributed Network Name (DNN)
  • Cluster Shared Volumes v2 (CSVv2)
  • SMB v3 improvements, which enable continuous availability and transparent failover.
SMB v3 allows SMB connections to be distributed across all nodes in the cluster that have simultaneous access to all shares. This can make it possible to provide access with almost zero downtime.

Installing the General Use File Server Role

File servers in a cluster can be configured for general use (such as users storing files in shares) or to support application storage for Hyper-V and SQL. The General Use File Server in Windows Server 2012 is almost the same as it was in Windows Server 2008 R2. The only significant difference is that shares can be made continuously available with the help of the SMB 3.0 protocol.
The following steps show the installation options for installing the General Use File Server role on a Windows Server 2012 failover cluster (a PowerShell equivalent follows the steps):
  1. Click on Configure Role in the Actions pane in Failover Cluster Manager.
  2. Click  Next on the Before You Begin page.
  3. On the Select Role page, select the File Server role. Make sure there are no errors indicating the role is not installed on all nodes in the cluster, and click Next.

Figure 1
  4. On the File Server Type page, select File Server for general use and click Next. Note that when you select this option, you have support for SMB and NFS shares, and you can also use File Server Resource Manager, Distributed File System Replication and other File Services role services.

Figure 2
  5. On the Client Access Point page, enter the information for the Client Access Point (CAP) and click Next.
  6. On the Select Storage page, enter a storage location for the data and click Next.
  7. On the Confirmation page, read the Confirmation information and click Next.
  8. On the Summary page, you can click the View Report button if you want to see details of the configuration. Click Finish.
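If you prefer scripting to the wizard, the same clustered role can be created with the FailoverClusters PowerShell module. This is only a rough equivalent of the steps above; the role name, cluster disk and IP address are placeholders:
  Import-Module FailoverClusters
  # Create a clustered File Server for general use, bound to a cluster disk and a static client access IP
  Add-ClusterFileServerRole -Name "FS-General" -Storage "Cluster Disk 1" -StaticAddress 192.168.1.50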
Now that the role is installed, you can create file shares on the failover cluster.
Perform the following steps to create the file shares:
  1. Click the File Server Role in the Failover Cluster Manager and in the Actions pane, click Add File Share.
  2. The server configuration will be retrieved as a connection is made to the File and Storage Services Management interface.
  3. The Select Profile page presents you with five options. For our purposes, you can choose either SMB Share - Basic or SMB Share - Advanced and click Next

Figure 3
  4. On the Share Location page, choose a Share Location and click Next.
  5. On the Share Name page, provide a Share Name and click Next.
  6. On the Other Settings page, there are a number of additional share settings from which you can choose. Notice that Enable Continuous Availability is checked by default; this is to take advantage of the new SMB v3 functionality (Transparent Failover). Another new feature in SMB v3 enables you to encrypt the SMB connection without requiring the overhead of IPsec. You can find out more about SMB v3 here. Click Next.

Figure 4
  7. On the Permissions page, you can configure permissions to control access (both NTFS and share permissions). Click Next.

Figure 5
  8. On the Confirmation page, review the information and click Create.
When the share is configured, it will appear in the Shares tab.

Figure 6
If you prefer the command line, you can also get information about the share by using the PowerShell cmdlet Get-SMBShare.
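For example, a continuously available share can be created and then inspected entirely from PowerShell; the scope (CAP) name, share name, path and group below are placeholders:
  # Create a share on the clustered file server with SMB Transparent Failover enabled
  New-SmbShare -Name "UserData" -ScopeName "FS-General" -Path "E:\Shares\UserData" -ContinuouslyAvailable $true -FullAccess "CONTOSO\FileAdmins"
  # Review the share and its properties, including the ContinuouslyAvailable flag
  Get-SmbShare -Name "UserData" | Format-List *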
Another place you can find share information is in the File and Storage Services Management Interface in Server Manager.

Installing the Scale-Out File Server Role

The Scale-Out File Server role is new in Windows Server 2012. With the many new technologies in Windows Server 2012, you can provide continuously available file services for application data and, at the same time, respond to increased demands quickly by bringing more servers online. Scale-Out File Servers take advantage of new features included in Windows Server 2012 Failover Clustering. The key new features in Windows Server 2012 that enable the Scale-Out File Server role are the following:
  • Distributed Network Name (DNN) – this is the name that client systems use to connect to cluster shared resources
  • Scale-Out File Server resource type
  • Cluster Shared Volumes Version 2 (CSVv2)
  • Scale-Out File Server Role
Note that Failover Clustering is required for Scale-Out File Servers and that Scale-Out File Server clusters are limited to four servers. Also, the File Server role service must be enabled on all nodes in the cluster.
SMB v3, which is installed and enabled by default in Windows Server 2012, provides several features that support continuous availability of file shares to end users and applications. It’s important to point out that Scale-Out File Servers support storing application data on file shares and that SMB v3 will provide continuous availability for those shares for the two supported applications, which are Hyper-V and SQL Server. Specific capabilities that are part of the new SMB v3 functionality (called SMB 2.2 in pre-release builds) include the following (see the quick check after this list):
  • SMB Transparent Failover – this allows all members of the cluster to host the shared resources and makes it possible for clients to connect to other members of the cluster transparently, without any perceptible disconnection on the client side.
  • SMB Multichannel – this enables the use of multiple network connections to connect to cluster-hosted resources and makes the cluster members highly available by supporting out-of-the-box NIC teaming and bandwidth aggregation.
  • SMB Direct (RDMA) – this makes it possible to take advantage of the full speed of the NICs without impacting the processors on the cluster members; it also makes it possible to obtain full wire speed and network access speeds comparable to direct-attached storage.
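Once clients are connecting, you can get a rough idea of whether these capabilities are actually in use with a few PowerShell checks (purely illustrative; run them where indicated):
  # On the file server: the SMB dialect each client session negotiated (3.x means SMB v3)
  Get-SmbSession | Select-Object ClientComputerName, Dialect
  # On the client (for example, the Hyper-V host): the multichannel connections currently in use
  Get-SmbMultichannelConnection
  # On the file server: network interfaces and whether they are RSS- or RDMA (SMB Direct)-capable
  Get-SmbServerNetworkInterface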
For more information about the Scale-Out File Server role, check out this link.
Perform the following steps to create a Scale-Out File Server Role (a PowerShell equivalent follows the steps):
  1. Click Configure Role in the Actions pane in Failover Cluster Manager.
  2. On the Before You Begin page, click Next.
  3. On the Select Role page, click the File Server role. Make sure there are no errors indicating the role is not installed on all nodes in the cluster and click Next.

Figure 7
  4. On the File Server Type page, select File Server for scale-out application data and click Next. Note that when you select this role, there is support only for SMB v3 shares; that is, there is no support for NFS shares. In addition, with this configuration you will not be able to use some file server role services, such as FSRM and DFS replication.

Figure 8
  5. On the Client Access Point page, enter a valid NetBIOS name for the Client Access Point and click Next.
  6. On the Confirmation page, review the information and click Next.
  7. When the wizard completes, you can click the View Report button to see details of the configuration. Click Finish.
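As with the general use role, there is a PowerShell equivalent for this wizard; the name below is a placeholder, and note that no storage or IP address is specified because the role uses Cluster Shared Volumes and a Distributed Network Name:
  Import-Module FailoverClusters
  # Create the Scale-Out File Server role; clients connect through the Distributed Network Name
  Add-ClusterScaleOutFileServerRole -Name "SOFS"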
Now that the role is installed, you’re ready to create file shares for applications where you can place the application data.
Perform the following steps to create shared folders:
  1. Click the File Server Role in the Failover Cluster Manager, and in the Actions pane, click on Add File Share.
  2. The server configuration will be retrieved as a connection is made to the File and Storage Services Management interface.
  3. On the Select Profile page of the New Share Wizard, choose SMB Share - Server Application for the profile and click Next.

Figure 9
  4. On the Share Location page, you should see only Cluster Shared Volumes. Select a volume where you want to place the share and click Next.

Figure 10
  5. On the Share Name page, enter a Share Name and click Next.
  6. On the Other settings page, note that Enable continuous availability is selected by default. Click Next.
  7. On the Permissions page, you can configure permissions to control access (both NTFS and share permissions) as needed. Click Next.
  8. Review the information on the Confirmation screen and click Create.
The Shares tab reflects all the shares that are configured on the CSV volumes.

Figure 11
The Distributed Network Name resource, which is part of the Scale-Out File Server role, has no dependencies on IP addresses; that means you don’t have to configure anything in advance for this to work. The reason for this is that the resource registers the node IP addresses for each node in the cluster in DNS. These IP addresses can be static IP addresses or they can be managed by DHCP. The IP address of each of the nodes in the cluster is recorded in DNS and is mapped to the Distributed Network Name. Clients then receive up to six addresses from the DNS server and DNS round robin is used to distribute the load.
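You can see this registration from any domain client by resolving the DNN; the name below is illustrative, and Resolve-DnsName requires Windows 8 / Windows Server 2012 or later:
  # Each cluster node's IP address should appear as an A record for the Distributed Network Name
  Resolve-DnsName -Name "SOFS.contoso.com" -Type A
  # nslookup returns the same records if you prefer the classic tool
  nslookup SOFS.contoso.com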

Summary

In this article, we took a quick look at some of the new file server role capabilities included in Windows Server 2012. The traditional file server role continues in Windows Server 2012, but it includes some nice new benefits, thanks to the new SMB v3 protocol, which provides continuous availability and near zero downtime for file resources hosted by the cluster. A new file services role, the Scale-Out File Server role, enables you to store application data for Hyper-V and SQL Server and is optimized for applications that require continuous connectivity to their files over the network. Several improvements in the SMB v3 protocol make it possible to host these files on a file server cluster and enable performance at wire speed, very close to the storage performance you can get with direct-attached storage.

Author: Deb Shinder

Thursday, March 22, 2012

Speed Up Your DotNetNuke Portals


To improve the responsiveness of your DotNetNuke application, set the Performance Setting to Heavy Caching.
Step 1: Change Cache Settings
  1. Log into your portal as host or another superuser
  2. From the Host menu, click Host Settings
  3. Expand the Advanced Settings section
  4. Expand the Performance Settings section
  5. Select Heavy Caching in the drop-down list for Performance Setting
  6. Click Update
Other Alternatives:

TrayKeepAlive from Gotchasoft

Gotchasoft (www.gotchasoft.com) offers a Windows application, TrayKeepAlive, which pings your DotNetNuke application's keepalive.aspx every fifteen minutes to keep IIS from shutting down the application. This significantly improves a site's response time when it receives infrequent visits.
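If you would rather not run a separate tray utility, a small PowerShell loop (or a scheduled task running something similar) can do the same job; the URL is a placeholder, and Invoke-WebRequest assumes PowerShell 3.0 or later:
  # Request keepalive.aspx every fifteen minutes so IIS does not unload the application
  while ($true) {
      try {
          Invoke-WebRequest -Uri "http://www.yourportal.com/keepalive.aspx" -UseBasicParsing | Out-Null
      } catch {
          Write-Warning "Keepalive request failed: $_"
      }
      Start-Sleep -Seconds 900
  }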

Blowery Compression Module

Scott McCulloch has a very comprehensive article on installing the Blowery Compression Module here: http://www.smcculloch.net/Home/tabid/35/ctl/ArticleView/mid/363/articleId/46/EnablingHTTPCompressionforDotNetNuke.aspx

SlipStream Module

SlipStream for DotNetNuke is available on Snowcovered. Its features include:
White space removal: automatically removes tabs, carriage returns, extra spaces, and comments from the raw HTML sent to the user.
View state removal: removes the nasty string of characters at the bottom of every transmission and holds it on the server.

Performance Settings

DotNetNuke can cache your page information in memory, so that subsequent attempts to access a page will show the same data and not reread from the database.
To improve the responsiveness of your DotNetNuke application, set the Performance Setting to Heavy Caching:
  1. Log into your portal as host or another superuser
  2. From the Host menu, click Host Settings
  3. Expand the Advanced Settings section
  4. Expand the Performance Settings section
  5. Select Heavy Caching in the drop-down list for Performance Setting
  6. Click Update

www.ephost.com

Wednesday, March 21, 2012

SQL SERVER – The server network address “TCP://SQLServer:5023” cannot be reached or does not exist. Check the network address name and that the ports for the local and remote endpoints are operational. (Microsoft SQL Server, Error: 1418)

While doing SQL Mirroring, we receive the following as the most common error:
The server network address “TCP://SQLServer:5023” cannot be reached or does not exist.
Check the network address name and that the ports for the local and remote endpoints are operational.
(Microsoft SQL Server, Error: 1418)
The solution to the above problem is very simple and as follows.
Fix/WorkAround/Solution: Try all the suggestions one by one.
Suggestion 1: Make sure that on Mirror Server the database is restored with NO RECOVERY option (This is the most common problem).
Suggestion 2: Make sure that from Principal the latest LOG backup is restored to mirror server. (Attempt this one more time even though the full backup has been restored recently).
Suggestion 3: Check if you can telnet to your ports using the command TELNET ServerName Port, for example “telnet SQLServerName 5023”.
Suggestion 4: Make sure your firewall is turned off.
Suggestion 5: Verify that the endpoints are started on the partners by using the state or state_desc column of the sys.database_mirroring_endpoints catalog view. You can start an endpoint by executing an ALTER ENDPOINT statement (see the PowerShell sketch after this list).
Suggestion 6: Try the following command as one of the last options.
GRANT CONNECT ON ENDPOINT::Mirroring TO [public]
Suggestion 7: Delete the endpoints and recreate them.
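For Suggestions 5 through 7, the checks and fixes can also be run from PowerShell with Invoke-Sqlcmd (available with the SQL Server management tools); the instance name is a placeholder, and the endpoint is assumed to be named Mirroring:
  # Check the state of the mirroring endpoint on each partner
  Invoke-Sqlcmd -ServerInstance "SQLServerName" -Query "SELECT name, state_desc FROM sys.database_mirroring_endpoints;"
  # If the endpoint is stopped, start it
  Invoke-Sqlcmd -ServerInstance "SQLServerName" -Query "ALTER ENDPOINT Mirroring STATE = STARTED;"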
If none of the above solutions fixes your problem, do leave a comment here. Based on the comments, I will update this article with additional suggestions.
Please note that some of the above suggestions can be a security threat to your system. Please use them responsibly and review your system with a security expert in your company.
http://blog.sqlauthority.com

Saturday, March 17, 2012

How to Install and Configure the First Lotus Domino Server

After the Domino Server software is installed, the next step is to set up and configure the first Domino server.
I. Planning
1. Domino Domain: NN
2. Domino Named Network: TCPIP
3. Organization: COM
4. Organization Unit: NN
5. Server name: Mail
6. Static IP: 192.168.78.87
I will cover the ID files in more detail in an upcoming topic on Domino security.
II. Setup
1. Click Start/All Programs/Lotus Applications/Lotus Domino Server.
2. When the Welcome to Domino Server Setup window appears, click Next.
3. In the First or additional server window, select Set up the first server or a stand-alone server, then click Next.

4. In the Provide a server name and title window, enter the information as shown, then click Next.
5. The Choose your organization name window appears:
  • In the Organization name field, enter: COM
  • Then click the Customize... button

  • In the Organization Unit name field, enter: NN
  • Enter a password for the OU Cert ID
  • Then click OK
  • Next, enter a password for the OU ID, then click Next
6. In the Choose the Domino domain name window, type the domain name and click Next (here I keep the default domain, NN).
7. In the Specify an Administrator name and password window, enter the administrator's name and password.
Note: Remember to check Also save a local copy of the ID file (this keeps a copy of admin.id locally, which is convenient for administration later on).
11. Select the services for the Domino Server; here I choose LDAP, HTTP, and the mail services. You can customize the services by clicking the Customize... button. When you are finished, click Next.
12. In the Domino network settings window, configure the required TCP/IP parameters, then click Next.
13. In the Secure your Domino Server window, keep the defaults and click Next.
14. In the Please review and confirm your chosen server setup options window, click Setup.
15. The system performs the installation.
16. In the setup summary window, click Finish to complete the setup.
17. Double-click the Domino icon on the desktop and select the startup option that suits you:
  • Start Domino as a Windows service: starts Domino as a Windows service (no console window)
  • Start Domino as a regular application: starts Domino as a regular application (with a console window)
  • Always start Domino as a service at system startup: installs the Domino service so that it starts when the system boots
  • Here I choose Start Domino as a regular application
Then click OK.
18. The Domino console window appears, and the installation and configuration are complete.
Configuration video
Next post: Installing Lotus Notes (Notes Client, Domino Administrator)

Thursday, March 15, 2012

Blocking Skype and other IM protocols in Forefront TMG

It has never been easier to block instant messaging (IM) with Forefront Threat Management Gateway (TMG). If you’ve read the article I wrote a couple of years ago on how to block IM protocols on ISA Server, you’ll definitely appreciate how much more easily and effectively you can do the same thing with TMG.
In this post, I show you how you can block Skype, Google Talk, Yahoo Messenger, Live Messenger, etc. using Forefront TMG 2010.
Before I go into the step-by-step procedure, I want to highlight what’s happening in the background.
  • Microsoft Forefront TMG 2010 now comes with URL Filtering. URL filtering enables you to block web content belonging to a particular category such as Chat, Social Networking, or Pornography.
  • Another new feature in TMG 2010 is Outbound HTTPS inspection. This allows all HTTPS user traffic to be inspected by TMG.
These are the two new features that we will leverage to block chat. Here is a summary of what we will do:
  • Make sure the only allowed outbound traffic on your TMG server is regular web traffic (HTTP and HTTPS). I am against creating “generic” rules like “allow all” from internal to external when you have SecureNAT clients in your network, as this defeats the purpose of filtering.
  • Turn on HTTPS inspection. Read my earlier post if you need help enabling HTTPS inspection.
  • In a “Deny” rule on your Web Access Policy, add the “Chat” URL category.
Why do you need HTTPS inspection?
Many IM clients, such as Skype, try to connect using dynamic UDP ports and eventually fall back to HTTPS. With HTTPS inspection turned on, TMG is able to inspect inside HTTPS to see if the software is trying to request access to a blocked URL.

1. In the Forefront TMG console, locate your Web Access Policy that denies traffic. If you do not have one, right click on Web Access Policy in the left pane and choose Configure Web Access Policy.
2. Click on the “To” tab. Click the Add button.
3. Expand URL Categories. Add the “Chat” URL category to the list.

4. Click OK and Apply your changes. Wait for the changes to synchronize (Tip: you can verify this under Monitoring > Configuration)

Now for the best part: try connecting to Skype, or any of your favorite instant messaging software. Note that the web versions of these messengers are also blocked!

On a closing note – you can use the same technique to block P2P (peer-to-peer) and file sharing applications like eMule, Kazaa, eDonkey, BitTorrent, etc. using TMG. In step 3, choose the “P2P/File sharing” URL category.
www.microsoftnow.com