Cluster Disk failed with error code '0x3e3'

Issue

Cluster Disk failed with error: Cluster resource 'Cluster Disk xx' of type 'Physical Disk' in clustered role 'Available Storage' failed. The error code was '0x3e3' ('The I/O operation has been aborted because of either a thread exit or an application request.').

Symptom

  • A Cluster Shared Volume fails to come online on any node in the cluster.

The System event log will have the following Failover-Clustering and volsnap events logged.

Log Name:      System

Source:        Microsoft-Windows-FailoverClustering

Event ID:      1069

Task Category: Resource Control Manager

Level:         Error

User:          SYSTEM

Description:

Cluster resource 'Cluster Disk xx' of type 'Physical Disk' in clustered role 'Available Storage' failed. The error code was '0x3e3' ('The I/O operation has been aborted because of either a thread exit or an application request.').

Log Name:      System

Source:        volsnap

Event ID:      29

Task Category: None

Level:         Error

Keywords:      Classic

Description:

The shadow copies of volume x: were aborted during detection.
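As a quick check, the two events above can be pulled from the System log with wevtutil; the event IDs and provider names are taken from the log excerpts shown here:

```shell
:: Query the System log for the Failover Clustering resource failure (event 1069)
wevtutil qe System /q:"*[System[Provider[@Name='Microsoft-Windows-FailoverClustering'] and (EventID=1069)]]" /f:text /c:5

:: Query the System log for the volsnap abort during detection (event 29)
wevtutil qe System /q:"*[System[Provider[@Name='volsnap'] and (EventID=29)]]" /f:text /c:5
```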

Cluster.log will contain the following events:

INFO  [RES] Physical Disk <Cluster Virtual Disk (VD3)>: HardDiskpWaitForPartitionsToArrive: Wait success and IOCTL_DISK_ARE_VOLUMES_READY completed with status=995

ERR   [RES] Physical Disk <Cluster Virtual Disk (VD3)>: OnlineThread: Failed while waiting for volume arrivals. Error 995

ERR   [RES] Physical Disk <Cluster Virtual Disk (xx)>: OnlineThread: Error 995 bringing resource online.
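On current versions of Windows Server, Cluster.log is not written continuously; it has to be generated on demand from an elevated prompt. A minimal sketch (the destination folder below is an example, not from the source):

```shell
:: Generate Cluster.log for every cluster node into C:\Temp (example path)
powershell -Command "Get-ClusterLog -Destination C:\Temp"
```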

Cause

The shadow storage area is too large, causing volsnap to take longer than 30 seconds to discover the shadow copies, which in turn causes the Cluster Shared Volume to fail to come online.
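To confirm this, you can check how much shadow storage is currently allocated on the affected volume (x: stands in for the affected volume letter, as elsewhere in this article):

```shell
:: Show used, allocated, and maximum shadow copy storage for the volume
vssadmin list shadowstorage /for=x:
```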

Solution

The resolution will require downtime for the cluster.

1. Take all clustered resources Offline

2. Leave one node running and stop Cluster Service on the remaining nodes

3. On the remaining node, set the Cluster Service Startup Type to Disabled

4. Disable ClusDisk by setting HKLM\System\CurrentControlSet\Services\ClusDisk\Start to 4 (Disabled)

5. Reboot the node

6. Using Disk Management, manually bring affected disk online

7. Use VssAdmin to decrease the size of the shadow storage

vssadmin resize shadowstorage /on=x: /for=x: /maxsize=1GB 

8. Set HKLM\System\CurrentControlSet\Services\ClusDisk\Start back to 1 (System start)

9. Set the Cluster Service Startup Type: Automatic

10. Reboot the node

11. Start the remaining nodes in the cluster. Now the CSV should come online
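The registry and service changes in the steps above can be sketched as command-line equivalents, run from an elevated prompt on the remaining node (x: is the affected volume, as in the rest of the article):

```shell
:: Step 3: disable the Cluster Service so it does not start after reboot
sc config ClusSvc start= disabled

:: Step 4: disable the ClusDisk driver (Start = 4 means Disabled)
reg add HKLM\System\CurrentControlSet\Services\ClusDisk /v Start /t REG_DWORD /d 4 /f

:: (reboot, then bring the affected disk online in Disk Management)

:: Step 7: shrink the shadow storage on the affected volume
vssadmin resize shadowstorage /on=x: /for=x: /maxsize=1GB

:: Steps 8-9: re-enable ClusDisk (Start = 1, System start) and the Cluster Service
reg add HKLM\System\CurrentControlSet\Services\ClusDisk /v Start /t REG_DWORD /d 1 /f
sc config ClusSvc start= auto
```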

Additional Information:

  • 320MB is usually the smallest amount you can specify; shrinking the shadow storage this far will effectively delete any existing VSS snapshots on the volume

vssadmin Resize ShadowStorage /For=C: /On=C: /MaxSize=320MB 

  • Use the previous command with a greater number, such as 10GB:

vssadmin Resize ShadowStorage /For=C: /On=C: /MaxSize=10GB 

  • You can choose to allocate a percentage of disk space instead. Microsoft recommends 20%. That command would look like this for the C: drive.

vssadmin Resize ShadowStorage /For=C: /On=C: /MaxSize=20% 

  • It is also possible to allocate shadow copy storage space on a different drive, as long as it's local:

vssadmin Resize ShadowStorage /For=C: /On=X: /MaxSize=200GB 

  • If no shadow storage association exists for the volume, you can create one by executing this command:

vssadmin add shadowstorage /for=c: /on=c: 
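Since shrinking shadow storage below the space already in use discards snapshots, it can be worth listing the existing shadow copies first to see what would be lost (C: as in the examples above):

```shell
:: Enumerate existing shadow copies on C: before resizing shadow storage
vssadmin list shadows /for=c:
```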

Sumeet Kumar

I am a Windows Core Engineer with 7+ years of experience in Windows Hyper-V, Failover Clustering, Windows Storage, Volume Shadow Copy Service (VSS), Docker & Containers on Windows Server, Backup & Recovery, and VMware vSphere ESXi & vCenter Server.
