Explanation
The ActiveSync service is monitored using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the ActiveSync health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the
appropriate recovery actions outlined in the following sections.
For example, to retrieve the ActiveSync health set details about server1.contoso.com, run the
following command:
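A sketch of the Exchange Management Shell command (the server name is a placeholder):

```powershell
# Retrieve monitor health for the ActiveSync health set
Get-ServerHealth -Identity server1.contoso.com -HealthSet ActiveSync
```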
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that’s in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
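The generic form of the command is as follows; substitute the health set and probe names from the table:

```powershell
Invoke-MonitoringProbe <health set name>\<probe name> -Server <server name> | Format-List
```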
For example, assume that the failing monitor is ActiveSyncCTPMonitor. The probe associated
with that monitor is ActiveSyncCTPProbe. To run this probe on server1.contoso.com, run the
following command:
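A hedged sketch (the health set prefix is an assumption; confirm the exact health set\probe pair against the table in the Explanation section):

```powershell
Invoke-MonitoringProbe ActiveSync\ActiveSyncCTPProbe -Server server1.contoso.com | Format-List
```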
d. In the command output, review the “Result” section of the probe. If the value is Succeeded, the
issue was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in
the following sections.
ActiveSyncDeepTestMonitor and ActiveSyncSelfTestMonitor Recovery
Actions
This monitor alert is typically issued on Mailbox servers. To perform recovery actions, follow these steps:
1. Start IIS Manager, and then connect to the server that is reporting the issue. Click Application Pools, and
then recycle the ActiveSync application pool that’s named MSExchangeSyncAppPool.
2. Rerun the associated probe as shown in step 2c in the Verifying the issue section.
3. If the issue still exists, recycle the entire IIS service by using the IISReset utility.
4. Rerun the associated probe as shown in step 2c in the Verifying the issue section.
5. If the issue still exists, restart the server. To do this, first fail over the databases that are hosted on the server
by using the following command:
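One way to move all active database copies off the server (a sketch, assuming the server is a database availability group member; the server name is a placeholder):

```powershell
# Perform a server switchover; all active copies are activated on other DAG members
Move-ActiveMailboxDatabase -Server server1.contoso.com -Confirm:$false
```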
In this and all subsequent code examples, replace server1.contoso.com with the actual server name.
6. Next, verify that all databases have been moved off the server that is reporting the issue. To do this, run the
following command:
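A sketch that lists any copies that are still active (Mounted) on the server; the command should return nothing if all databases have moved:

```powershell
Get-MailboxDatabaseCopyStatus -Server server1.contoso.com | Where-Object {$_.Status -eq "Mounted"}
```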
7. If the command output in step 6 shows no active copies on the server, restart the server. If the output does
show active copies, run steps 5 and 6 again.
8. After the server restarts, rerun the associated probe as shown in step 2c in the Verifying the issue section.
9. If the probe succeeds, fail the databases back to the server by running the following command:
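A sketch for reactivating a database on the restarted server (DB01 is a hypothetical database name; repeat for each database):

```powershell
Move-ActiveMailboxDatabase -Identity DB01 -ActivateOnServer server1.contoso.com
```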
10. If the probe still fails, you may need further assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your
organization may have a specific procedure for directly contacting Microsoft Product Support Services, be
sure to review your organization's guidelines first.
b. If any of the monitors that are listed in the command output are reported to be unhealthy, you must
address those monitors first. To do this, follow the troubleshooting steps that are outlined in the
ActiveSyncDeepTestMonitor and ActiveSyncSelfTestMonitor Recovery Actions section.
6. If all monitors on the Mailbox server are healthy, restart the CAS.
7. After the server restarts, rerun the associated probe as shown in step 2c in the Verifying the issue section.
8. If the probe continues to fail, you may need further assistance to resolve this issue. Contact a Microsoft
Support professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange
Server Solutions Center. In the navigation pane, click Support options and resources and use one of the
options listed under Get technical support to contact a Microsoft Support professional. Because your
organization may have a specific procedure for directly contacting Microsoft Product Support Services, be
sure to review your organization's guidelines first.
3. If the issue persists, recycle the entire IIS service by using the IISReset utility.
4. Wait 10 minutes, and then run the command shown in step 2 again to see whether the monitor remains
healthy.
5. If the issue persists, restart the server. If the server is a CAS, just restart the server. If the server is a Mailbox
server, do the following:
a. Fail over the databases that are hosted on the server. To do this, run the following command:
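A sketch that moves all active copies off the server (assuming a database availability group member; the server name is a placeholder):

```powershell
# Server switchover: activate all of this server's active copies elsewhere
Move-ActiveMailboxDatabase -Server server1.contoso.com -Confirm:$false
```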
Note In this and all subsequent code examples, replace server1.contoso.com with the actual server
name.
b. Verify that all the databases have been moved off the server that is reporting the issue. To do this,
run the following command:
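A sketch that checks for copies still active (Mounted) on the server; no output means all databases have moved:

```powershell
Get-MailboxDatabaseCopyStatus -Server server1.contoso.com | Where-Object {$_.Status -eq "Mounted"}
```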
If the command output shows no active copies on the server, restart the server.
6. After the server restarts, wait 10 minutes, and then run the command shown in step 2 again to determine
whether the monitor remains healthy.
7. If the monitor remains healthy, and if this is a Mailbox server, fail the databases back by running the
following command:
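A sketch for moving a database back to the restarted server (DB01 is a hypothetical database name; repeat per database):

```powershell
Move-ActiveMailboxDatabase -Identity DB01 -ActivateOnServer server1.contoso.com
```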
8. If the probe continues to fail, you may need further assistance to resolve this issue. Contact a Microsoft
Support professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange
Server Solutions Center. In the navigation pane, click Support options and resources and use one of the
options listed under Get technical support to contact a Microsoft Support professional. Because your
organization may have a specific procedure for directly contacting Microsoft Product Support Services, be
sure to review your organization's guidelines first.
Explanation
The ActiveSync.Protocol health set works in conjunction with the ActiveSync health set. For detailed information
about the ActiveSync health set, see Troubleshooting ActiveSync Health Set.
User Action
It’s possible that the ActiveSync service recovered after it issued the alert. Therefore, when you receive an alert
that indicates that the ActiveSync health set is unhealthy, first verify that the issue still exists. For more information,
see Troubleshooting ActiveSync Health Set.
Explanation
The ActiveSync.Proxy health set works in conjunction with the ActiveSync health set. For detailed information
about the ActiveSync health set, see Troubleshooting ActiveSync Health Set.
User Action
The ActiveSync service might have been able to recover after it issued the alert. Therefore, when you receive an
alert that indicates that the ActiveSync health set is unhealthy, first verify that the issue still exists. For more
information, see Troubleshooting ActiveSync Health Set.
Explanation
The Autodiscover service is monitored by using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
Common issues
This probe can fail for any of the following common reasons:
The Autodiscover application pool (MSExchangeAutodiscoverAppPool) that is hosted on the monitored
Client Access server (CAS) is not responding. Or, the Autodiscover application pool that is hosted on one or
more Mailbox servers is not responding.
The CAS is experiencing networking issues and cannot connect to the Mailbox server or Domain Controller.
The monitoring account credentials are incorrect.
The Domain Controllers are not responding.
User Action
It’s possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the Autodiscover health set details about server1.contoso.com, run the
following command:
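A sketch of the command (the server name is a placeholder):

```powershell
Get-ServerHealth -Identity server1.contoso.com -HealthSet Autodiscover
```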
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert is Unhealthy.
c. Rerun the associated probe for the monitor that’s in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor is AutodiscoverCtpMonitor. The probe associated
with that monitor is AutodiscoverCtpProbe. To run that probe on server1.contoso.com, run the
following command:
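A hedged sketch (confirm the health set\probe pair against the table in the Explanation section):

```powershell
Invoke-MonitoringProbe Autodiscover\AutodiscoverCtpProbe -Server server1.contoso.com | Format-List
```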
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
8. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
9. If the issue still exists, recycle the IIS service by using the IISReset utility or by running the following
command:
Iisreset /noforce
10. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
11. If the issue still exists, restart the server.
12. After the server restarts, rerun the associated probe as shown in step 2c in the Verifying the issue still exists
section.
13. If the probe continues to fail, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your organization
may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review
your organization's guidelines first.
Explanation
The Autodiscover.Protocol service is monitored by using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
Common issues
This probe can fail for any of the following common reasons:
The Autodiscover application pool (MSExchangeAutodiscoverAppPool) that is hosted on the monitored
Client Access server (CAS) is not responding. Or, the Autodiscover application pool that is hosted on one or
more Mailbox servers is not responding.
The Domain Controllers are not responding.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the Autodiscover.Protocol health set details about server1.contoso.com, run
the following command:
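A sketch of the command (the server name is a placeholder):

```powershell
Get-ServerHealth -Identity server1.contoso.com -HealthSet Autodiscover.Protocol
```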
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that’s in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor is AutodiscoverSelfTestMonitor. The probe
associated with that monitor is AutodiscoverSelfTestProbe. To run that probe on
server1.contoso.com, run the following command:
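A hedged sketch (confirm the health set\probe pair against the table in the Explanation section):

```powershell
Invoke-MonitoringProbe Autodiscover.Protocol\AutodiscoverSelfTestProbe -Server server1.contoso.com | Format-List
```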
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
8. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
9. If the issue still exists, recycle the IIS service using the IISReset utility or by running the following
command:
Iisreset /noforce
10. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
11. If the issue still exists, restart the server.
12. After the server restarts, rerun the associated probe as shown in step 2c in the Verifying the issue still exists
section.
13. If the probe continues to fail, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your organization
may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review
your organization's guidelines first.
Explanation
The Autodiscover service is monitored by using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
Common issues
This probe can fail for any of the following common reasons:
The application pool that’s hosted on the monitored CAS is not working correctly.
The monitoring account credentials are incorrect.
The Domain Controllers are not responding.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the Autodiscover.Protocol health set details about server1.contoso.com, run
the following command:
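A sketch of the command (the server name is a placeholder; the health set name is taken from the sentence above):

```powershell
Get-ServerHealth -Identity server1.contoso.com -HealthSet Autodiscover.Protocol
```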
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that’s in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor is AutodiscoverSelfTestMonitor. The probe
associated with that monitor is AutodiscoverSelfTestProbe. To run that probe on
server1.contoso.com, run the following command:
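A hedged sketch (the health set prefix is an assumption; confirm the pair against the table in the Explanation section):

```powershell
Invoke-MonitoringProbe Autodiscover.Protocol\AutodiscoverSelfTestProbe -Server server1.contoso.com | Format-List
```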
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
5. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
6. If the issue still exists, recycle the IIS service by using the IISReset utility.
7. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
8. If the issue still exists, restart the server.
9. After the server restarts, rerun the associated probe as shown in step 2c in the Verifying the issue still exists
section.
10. If the probe continues to fail, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your
organization may have a specific procedure for directly contacting Microsoft Product Support Services, be
sure to review your organization's guidelines first.
Explanation
The DataProtection Health service is monitored by using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the DataProtection health set details about server1.contoso.com, run
the following command:
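A sketch of the command (the server name is a placeholder; the health set name is assumed from this section, which covers DataProtection):

```powershell
# Health set name assumed from this section (DataProtection)
Get-ServerHealth -Identity server1.contoso.com -HealthSet DataProtection
```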
Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
b. Identify the probe that the monitor is based on. Note that most probes share the same name prefix.
By using the previous example, search for “ClusterNetwork*”:
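A sketch of the lookup (the server name is a placeholder; the health set name is assumed to be DataProtection for this section):

```powershell
Get-MonitoringItemIdentity -Identity DataProtection -Server server1.contoso.com |
    Where-Object {$_.Name -like "ClusterNetwork*"}
```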
For example, assume that the failing monitor is AutodiscoverSelfTestMonitor. The probe
associated with that monitor is AutodiscoverSelfTestProbe. To run that probe on
server1.contoso.com, run the following command:
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
Troubleshooting steps
When you receive an alert from a health set, the email message contains the following information:
Name of the server that sent the alert
Time and date when the alert occurred
Authentication mechanism that was used, and credential information
Full exception trace of the last error, including diagnostic data and specific HTTP header information
You can use the information in the full exception trace to help troubleshoot the issue. The exception
generated by the probe contains a failure Reason that describes why the probe failed.
For most issues that occur in high availability environments, you can run the Test-ReplicationHealth cmdlet to
help troubleshoot the cluster, networking, Active Manager, and related services. Other health sets and components
have their own Test-* cmdlets.
For example:
Test-ReplicationHealth <ServerName>
If all components display Passed in the Result column, try to rerun the associated probe as shown in step 2c in the
Verifying the issue still exists section.
If the issue still exists, restart the server. After the server restarts, rerun the associated probe as shown in step 2c in
the Verifying the issue still exists section.
If the probe continues to fail, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server Solutions
Center. In the navigation pane, click Support options and resources and use one of the options listed under Get
technical support to contact a Microsoft Support professional. Because your organization may have a specific
procedure for directly contacting Microsoft Product Support Services, be sure to review your organization's
guidelines first.
Explanation
The EAC service is monitored by using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
User Action
When you receive an alert from a health set, the email message contains the following information:
Name of the server that sent the alert
Time and date when the alert occurred
Authentication and credential information
Full exception trace of the last error, including diagnostic data and specific HTTP header information
Note You can use the information in the full exception trace to help troubleshoot the issue.
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the ECP health set details about server1.contoso.com, run the following
command:
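A sketch of the command (the server name is a placeholder):

```powershell
Get-ServerHealth -Identity server1.contoso.com -HealthSet ECP
```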
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that’s in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor is EacSelfTestMonitor. The probe associated with that
monitor is EacSelfTestProbe. To run that probe on server1.contoso.com, run the following
command:
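A hedged sketch (the health set prefix is an assumption; confirm the pair against the table in the Explanation section):

```powershell
Invoke-MonitoringProbe ECP\EacSelfTestProbe -Server server1.contoso.com | Format-List
```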
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
Explanation
The EAC service is monitored by using the following probes and monitors:
For more information about probes and monitors, see Server health and performance.
Common issues
This probe may fail for any of the following common reasons:
The application pool that’s hosted on the monitored CAS is not working correctly.
The monitoring account credentials are incorrect.
The Domain Controllers are not responding.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the ECP.Proxy health set details about server1.contoso.com, run the
following command:
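A sketch of the command (the server name is a placeholder):

```powershell
Get-ServerHealth -Identity server1.contoso.com -HealthSet ECP.Proxy
```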
b. Review the command output, and determine which monitor reported the error. The AlertValue
value for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that is in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor is ECPProxyTestMonitor. The probe associated with
that monitor is ECPProxyTestProbe. To run that probe on server1.contoso.com, run the following
command:
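A hedged sketch (confirm the health set\probe pair against the table in the Explanation section):

```powershell
Invoke-MonitoringProbe ECP.Proxy\ECPProxyTestProbe -Server server1.contoso.com | Format-List
```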
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
5. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
6. If the issue still exists, recycle the IIS service by using the IISReset utility.
7. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
8. If the issue still exists, restart the server.
9. After the server restarts, rerun the associated probe as shown in step 2c in the Verifying the issue still exists
section.
10. If the probe continues to fail, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your organization
may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review
your organization's guidelines first.
Explanation
EWS is monitored by using the following probes and monitors.
This probe performs a full EWS logon from the Client Access server (CAS) to a Mailbox server by using a
monitoring account. This probe calls the GetFolder method on EWS. For more information about probes and
monitors, see Server health and performance.
Common issues
This probe can fail for any of the following common reasons:
A mismatch exists between the authentication mechanism that is used by the probe and the authentication
mechanism that is used on the CAS virtual directory.
The EWS Application pool in the CAS that’s being monitored is not responding.
The CAS is experiencing networking issues when it connects to the Mailbox server.
The CAS is experiencing communication issues when it connects to Domain Controllers.
The Domain Controllers are not responding.
The EWS Application pool that resides on one or more Mailbox servers is not responding.
The user’s database is not mounted, or the Information Store is unavailable for a specific mailbox.
The Information Store service on one or more Mailbox servers is experiencing issues.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the EWS health set details about server1.contoso.com, run the following
command:
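A sketch of the command (the server name is a placeholder):

```powershell
Get-ServerHealth -Identity server1.contoso.com -HealthSet EWS
```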
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that‘s in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
Invoke-MonitoringProbe <health set name>\<probe name> -Server <server name> | Format-List
For the EWS health set, assume that the failing monitor is EWSCtpMonitor. The probe associated
with that monitor is EWSCtpProbe. To run that probe on server1.contoso.com, run the following
command:
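A hedged sketch that follows the generic form shown above (confirm the health set\probe pair against the table in the Explanation section):

```powershell
Invoke-MonitoringProbe EWS\EWSCtpProbe -Server server1.contoso.com | Format-List
```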
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
4. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
5. If the issue still exists, recycle the entire IIS service by using the IISReset utility.
6. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
7. If the issue still exists, review the protocol log files on the CAS and Mailbox servers. The protocol logs for the
CAS reside in the <exchange server installation directory>\Logging\HttpProxy\Ews folder. On the Mailbox
server, the logs reside in the <exchange server installation directory>\Logging\Ews folder.
8. Create a test user account, and then log on by using the test user account against the given CAS. For
example, log on by using https://<servername>/ews/exchange.asmx. If the issue still exists, try a different
CAS to determine whether the problem is scoped to that CAS rather than to the Mailbox server. If the test
user logon succeeds, an issue may affect the specific mailbox database or Mailbox server on which the
monitoring mailbox is located. Try to repeat this step by using a test account that exists in that mailbox database.
9. Check network connectivity between the CAS and the Mailbox server.
10. Check for any alerts on the EWS.Proxy Health Set that might indicate a problem that affects a specific CAS.
11. Check for any alerts on the EWS.Protocol Health Set that might indicate a problem that affects a specific
Mailbox server.
12. If the issue still exists, restart the server. To do this, first fail over the databases that are hosted on the server
by running the following command:
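A sketch that moves all active copies off the server (assuming a database availability group member; the server name is a placeholder):

```powershell
# Server switchover: activate all of this server's active copies elsewhere
Move-ActiveMailboxDatabase -Server server1.contoso.com -Confirm:$false
```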
Note In this and all subsequent code examples, replace server1.contoso.com with the actual server name.
13. Verify that all the databases have been moved off the server that is reporting the issue. To do this, run the
following command:
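A sketch that checks for copies still active (Mounted) on the server; no output means all databases have moved:

```powershell
Get-MailboxDatabaseCopyStatus -Server server1.contoso.com | Where-Object {$_.Status -eq "Mounted"}
```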
If the command output shows no active copies on the server, restart the server.
14. After the server restarts, rerun the associated probe as shown in step 2c in the Verifying the issue still exists
section.
15. If the probe succeeds, fail the databases back to the Mailbox server by running the following command:
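A sketch for reactivating a database on the restarted server (DB01 is a hypothetical database name; repeat per database):

```powershell
Move-ActiveMailboxDatabase -Identity DB01 -ActivateOnServer server1.contoso.com
```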
16. If the probe is still failing, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your organization
may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review
your organization's guidelines first.
Explanation
The EWS.Protocol health set is composed of the following probes:
1. EwsSelfTestProbe
2. EwsDeepTestProbe
The EwsSelfTestProbe does not depend on the Information Store. However, the EwsDeepTestProbe probe
depends on the Information Store. Both of these probes perform EWS operations on the Mailbox server, and they
use the same authentication method as a Client Access server (CAS). EwsSelfTestProbe calls the ConvertId
method, and EwsDeepTestProbe calls the GetFolder method.
For more information about probes and monitors, see Server health and performance.
When you receive an alert from this HealthSet, the email message contains the following information:
1. Name of the Mailbox server on which the alert originated
2. Full exception trace of the last error, including diagnostic data and specific HTTP headers information
3. Time when the incident occurred
Common issues
This probe can fail for any of the following common reasons:
The EWS Application pool on the monitored Mailbox server is not functioning correctly.
The Domain Controllers are not responding, or they cannot communicate with the Mailbox server.
The user’s database is not mounted, or the Information Store is unavailable for a specific mailbox.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the EWS.Protocol health set details about server1.contoso.com, run the
following command:
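A sketch of the command (the server name is a placeholder):

```powershell
Get-ServerHealth -Identity server1.contoso.com -HealthSet EWS.Protocol
```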
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that is in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor is EWSSelfTestMonitor. The probe associated with
that monitor is EWSSelfTestProbe. To run that probe on server1.contoso.com, run the following
command:
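A hedged sketch (confirm the health set\probe pair against the table in the Explanation section):

```powershell
Invoke-MonitoringProbe EWS.Protocol\EWSSelfTestProbe -Server server1.contoso.com | Format-List
```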
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
4. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
5. If the issue still exists, restart the IIS service by using the IISReset utility.
6. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
7. If the issue still exists, review the protocol log files on the Mailbox server. On the Mailbox server, the logs
reside in the <exchange server installation directory>\Logging\Ews folder.
8. Create a test user account, and then log on by using the test user account against the given Mailbox server
on port 444: https://<servername>:444/ews/exchange.asmx. If the test is successful, an issue may affect the
specific mailbox database or Mailbox server on which the monitoring mailbox is located. Try to repeat this
step by using a test account on that database.
9. Check for any alerts on the EWS.Protocol Health Set that might indicate a problem that affects the specific
Mailbox server.
10. If the issue still exists, restart the server. To do this, first fail over the databases that are hosted on the server
by using the following command:
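A sketch that moves all active copies off the server (assuming a database availability group member; the server name is a placeholder):

```powershell
# Server switchover: activate all of this server's active copies elsewhere
Move-ActiveMailboxDatabase -Server server1.contoso.com -Confirm:$false
```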
In this and all subsequent code examples, replace server1.contoso.com with the actual server name.
11. Verify that all the databases have been moved off the server that is reporting the issue. To do this, run the
following command:
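A sketch that checks for copies still active (Mounted) on the server; no output means all databases have moved:

```powershell
Get-MailboxDatabaseCopyStatus -Server server1.contoso.com | Where-Object {$_.Status -eq "Mounted"}
```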
If the command output shows no active copies on the server, the server is safe to restart. Restart the
server.
12. After the server restarts, rerun the associated probe as shown in step 2c in the Verifying the issue still exists
section.
13. If the probe succeeds, fail the databases back by running the following command:
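The failback command is missing from this extract; presumably it reverses the earlier failover setting:

```powershell
# Hypothetical sketch: allow database copies to activate on the server again
Set-MailboxServer -Identity server1.contoso.com -DatabaseCopyActivationDisabledAndMoveNow $false
```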
14. If the probe is still failing, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your organization
may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review
your organization's guidelines first.
For More Information
What's new in Exchange 2013
10/5/2018 • 3 minutes to read • Edit Online
Explanation
The EWS service is monitored by using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
Common issues
This probe can fail for any of the following common reasons:
The application pool that’s hosted on the monitored CAS is not working correctly.
The monitoring account credentials are incorrect.
The Domain Controllers are not responding.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the EWS.Proxy health set details about server1.contoso.com, run the
following command:
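The command body is missing from this extract; it is presumably a Get-ServerHealth call such as:

```powershell
# Hypothetical sketch: retrieve monitor states for the EWS.Proxy health set
Get-ServerHealth -Identity server1.contoso.com -HealthSet "EWS.Proxy"
```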
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that is in an unhealthy state. Refer to the table in the
Verifying the issue still exists section to find the associated probe. To do this, run the following
command:
For example, assume that the failing monitor is EWSProxyTestMonitor. The probe associated with
that monitor is EWSProxyTestProbe. To run that probe on server1.contoso.com, run the following
command:
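The command is missing here; presumably it is an Invoke-MonitoringProbe call such as (probe identity prefix assumed):

```powershell
# Hypothetical sketch: rerun the EWS proxy probe on the affected server
Invoke-MonitoringProbe -Identity "EWS.Proxy\EWSProxyTestProbe" -Server server1.contoso.com | Format-List
```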
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
5. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
6. If the issue still exists, recycle the IIS service by using the IISReset utility.
7. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
8. If the issue still exists, restart the server.
9. After the server restarts, rerun the associated probe as shown in step 2c in the Verifying the issue still exists
section.
10. If the probe continues to fail, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your
organization may have a specific procedure for directly contacting Microsoft Product Support Services, be
sure to review your organization's guidelines first.
Explanation
The FIPS service is monitored using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the FIPS health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the
appropriate recovery actions outlined in the following section.
For example, to retrieve the FIPS health set details about server1.contoso.com, run the following
command:
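The command body is missing from this extract; it is presumably a Get-ServerHealth call such as:

```powershell
# Hypothetical sketch: retrieve monitor states for the FIPS health set
Get-ServerHealth -Identity server1.contoso.com -HealthSet "FIPS"
```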
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
Explanation
The HubTransport service is monitored using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the HubTransport health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform
the appropriate recovery actions outlined in the following section.
For example, to retrieve the HubTransport health set details about mailbox1.contoso.com, run the
following command:
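The command body is missing from this extract; it is presumably a Get-ServerHealth call such as:

```powershell
# Hypothetical sketch: retrieve monitor states for the HubTransport health set
Get-ServerHealth -Identity mailbox1.contoso.com -HealthSet "HubTransport"
```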
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that’s in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor is ActiveQueueDrainFailureMonitor. The probe
associated with that monitor is ActiveQueueDrainFailureProbe. To run this probe on
mailbox1.contoso.com, run the following command:
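The command is missing here; presumably it is an Invoke-MonitoringProbe call such as (probe identity prefix assumed):

```powershell
# Hypothetical sketch: rerun the queue-drain probe on the affected Mailbox server
Invoke-MonitoringProbe -Identity "HubTransport\ActiveQueueDrainFailureProbe" -Server mailbox1.contoso.com | Format-List
```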
d. In the command output, review the "Result" section of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists.
Explanation
The IMAP4 service is monitored by using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the IMAP health set details about server1.contoso.com, run the following
command:
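The command body is missing from this extract; it is presumably a Get-ServerHealth call such as:

```powershell
# Hypothetical sketch: retrieve monitor states for the IMAP health set
Get-ServerHealth -Identity server1.contoso.com -HealthSet "IMAP"
```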
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that is in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor is ImapCTPMonitor. The probe associated with that
monitor is ImapCTPProbe. To run that probe on server1.contoso.com, run the following command:
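The command is missing here; presumably it is an Invoke-MonitoringProbe call such as (probe identity prefix assumed):

```powershell
# Hypothetical sketch: rerun the IMAP connectivity probe on the affected server
Invoke-MonitoringProbe -Identity "IMAP\ImapCTPProbe" -Server server1.contoso.com | Format-List
```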
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
ImapTestDeepMonitor and ImapSelfTestMonitor Recovery Actions
1. Restart the Exchange IMAP4 service on the back-end server. For more information about how to stop and
start the IMAP4 service, see Start and stop the IMAP4 services.
2. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
3. If the issue still exists, you must fail over the databases hosted on the Mailbox server by using the following
command:
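The failover command is missing from this extract; it is presumably along these lines (parameter choice assumed):

```powershell
# Hypothetical sketch: move active database copies off the Mailbox server
Set-MailboxServer -Identity server1.contoso.com -DatabaseCopyActivationDisabledAndMoveNow $true
```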
4. Verify that all databases have been moved off the server that’s reporting the issue. To do this, run the
following command:
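The verification command is missing here; it is presumably a Get-MailboxDatabaseCopyStatus call such as:

```powershell
# Hypothetical sketch: list any database copies still mounted on the server
Get-MailboxDatabaseCopyStatus -Server server1.contoso.com | Where-Object { $_.Status -eq "Mounted" }
```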
If the command output shows no active copies on the server, restart the server.
5. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
6. If the probe succeeds, fail the databases back by running the following command:
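The failback command is missing from this extract; presumably it reverses the earlier failover setting:

```powershell
# Hypothetical sketch: allow database copies to activate on the server again
Set-MailboxServer -Identity server1.contoso.com -DatabaseCopyActivationDisabledAndMoveNow $false
```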
7. If the probe continues to fail, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your organization
may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review
your organization's guidelines first.
b. Restart the Exchange IMAP4 service on the back-end server. For more information about how to stop
and start the IMAP4 service, see Start and stop the IMAP4 services.
c. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
d. Determine the location of the log file. To do this, run the following
command:
Get-ImapSettings -server <CAS server name>
e. Determine the Mailbox server that is serving this command. The server name is the value that follows
_Mbx: in the error message.
Note In this command, replace mailbox1.contoso.com with the actual Mailbox server name.
g. If any of the monitors that are listed in the command output are reported as unhealthy, you must
address those monitors first. Follow the troubleshooting steps outlined in the ImapTestDeepMonitor
and ImapSelfTestMonitor Recovery Actions section.
4. If the Mailbox server is reported as healthy, restart the CAS.
5. After the server restarts, rerun the associated probe as shown in step 2c in the Verifying the issue still exists
section.
6. Turn off protocol logging. To do this, run the following Windows PowerShell command:
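The command is missing from this extract; it is presumably a Set-ImapSettings call such as:

```powershell
# Hypothetical sketch: disable IMAP4 protocol logging on the server
Set-ImapSettings -Server server1.contoso.com -ProtocolLogEnabled $false
```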
AverageCommandProcessingTimeGt60sMonitor and RequestsQueuedGt500Monitor Recovery Actions
These monitor alerts are typically issued on Client Access servers (CAS) and Mailbox servers.
1. Restart the Exchange IMAP4 service on the back-end server or CAS. For more information about how to
stop and start the IMAP4 service, see Start and stop the IMAP4 services
2. Wait 10 minutes to see whether the monitor stays healthy. After 10 minutes, run the following command:
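The command is missing from this extract; presumably it checks the monitor state with Get-ServerHealth (health-set name assumed from context):

```powershell
# Hypothetical sketch: check whether the IMAP monitors report healthy
Get-ServerHealth -Identity server1.contoso.com -HealthSet "IMAP.Protocol"
```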
Note In this command, replace server1.contoso.com with the actual server name.
3. Wait 10 minutes, and then run the command shown in step 2 again to see whether the monitor stays
healthy.
4. If the issue still exists, you must restart the server. If the server is a CAS, just restart the server. If the server
is a Mailbox server, do the following:
a. Fail over the databases that are hosted on the server. To do this, run the following command:
Note In this and all subsequent code examples, replace server1.contoso.com with the actual server
name.
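The failover command is missing from this extract; it is presumably along these lines (parameter choice assumed):

```powershell
# Hypothetical sketch: move active database copies off the server before a restart
Set-MailboxServer -Identity server1.contoso.com -DatabaseCopyActivationDisabledAndMoveNow $true
```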
b. Verify that all databases have been moved off the server that is reporting the issue. To do this, run
the following command:
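The verification command is missing here; it is presumably a Get-MailboxDatabaseCopyStatus call such as:

```powershell
# Hypothetical sketch: list any database copies still mounted on the server
Get-MailboxDatabaseCopyStatus -Server server1.contoso.com | Where-Object { $_.Status -eq "Mounted" }
```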
If the command output shows no active copies on the server, restart the server.
5. After the server restarts, wait 10 minutes, and then run the command shown in step 2 again to see whether
the monitor stays healthy.
6. If the monitor stays healthy, and if this is a Mailbox server, fail the databases back by running the following
command:
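The failback command is missing from this extract; presumably it reverses the earlier failover setting:

```powershell
# Hypothetical sketch: allow database copies to activate on the server again
Set-MailboxServer -Identity server1.contoso.com -DatabaseCopyActivationDisabledAndMoveNow $false
```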
7. If the probe continues to fail, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your organization
may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review
your organization's guidelines first.
Explanation
The IMAP.Protocol health set works in conjunction with the IMAP health set.
User Action
For more information about the IMAP health set, see Troubleshooting IMAP Health Set.
Explanation
The IMAP.Proxy health set works in conjunction with the IMAP health set.
User Action
For more information about the IMAP health set, see Troubleshooting IMAP Health Set.
Explanation
The MailboxTransport service is monitored using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the MailboxTransport health set is unhealthy, first verify that the issue still exists. If the issue does exist,
perform the appropriate recovery actions outlined in the following section.
For example, to retrieve the MailboxTransport health set details about mailbox1.contoso.com, run
the following command:
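The command body is missing from this extract; it is presumably a Get-ServerHealth call such as:

```powershell
# Hypothetical sketch: retrieve monitor states for the MailboxTransport health set
Get-ServerHealth -Identity mailbox1.contoso.com -HealthSet "MailboxTransport"
```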
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that’s in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor is MailboxDeliveryAvailabilityMonitor. The probe
associated with that monitor is MailboxDeliveryAvailability. To run this probe on
mailbox1.contoso.com, run the following command:
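The command is missing here; presumably it is an Invoke-MonitoringProbe call such as (probe identity prefix assumed):

```powershell
# Hypothetical sketch: rerun the mailbox delivery probe on the affected Mailbox server
Invoke-MonitoringProbe -Identity "MailboxTransport\MailboxDeliveryAvailability" -Server mailbox1.contoso.com | Format-List
```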
d. In the command output, review the "Result" section of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists.
Explanation
The MRS service is monitored by using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the MRS health set details about server1.contoso.com, run the following
command:
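The command body is missing from this extract; it is presumably a Get-ServerHealth call such as:

```powershell
# Hypothetical sketch: retrieve monitor states for the MRS health set
Get-ServerHealth -Identity server1.contoso.com -HealthSet "MRS"
```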
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that is in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor is MRSServiceCrashingMonitor. The probe
associated with that monitor is MRSServiceCrashingProbe. To run that probe on
server1.contoso.com, run the following command:
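The command is missing here; presumably it is an Invoke-MonitoringProbe call such as (probe identity prefix assumed):

```powershell
# Hypothetical sketch: rerun the MRS crash probe on the affected server
Invoke-MonitoringProbe -Identity "MRS\MRSServiceCrashingProbe" -Server server1.contoso.com | Format-List
```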
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
Common Issues
When you receive an alert from a health set, the email message contains the following information:
Name of the server that sent the alert
Time and date when the alert occurred
Authentication mechanism used, and credential information
Full exception trace of the last error, including diagnostic data and specific HTTP header information
You can use the information in the full exception trace to help troubleshoot the issue. The exception
generated by the probe contains a Failure Reason that describes why the probe failed.
Mailbox Locked
When a mailbox is locked, you may receive an alert that resembles the following:
MailboxIdentity: namprd03.prod.outlook.com/Microsoft Exchange Hosted Organizations/example.com/User6
MailboxGuid: Primary (00000000-abcd-01234-5678-1234567890ab) RequestFlags: IntraOrg, Pull, Protected
Database: exampledb-db089 Exception: MapiExceptionADUnavailable: Unable to prepopulate the cache for user
…
This indicates that a mailbox is locked. To unlock the mailbox, run the following command:
Note In this command, replace <mailboxIdentity> with the name of the mailbox that's provided in the email
message as MailboxIdentity. If the mailbox is an archive mailbox, you must include the -Archive flag. You can
determine whether a mailbox is a primary or archive mailbox by viewing the MailboxGuid field in the alert.
Corrupt Migration Job
When a corrupted migration job occurs, you may receive an alert that resembles the following:
Notification thrown by MailboxMigration at 9/7/2012 9:08:32 PM. Details: Diagnostic Information:
ProcessCacheEntry: First Organization :: /o=ExchangeLabs/ou=Exchange Administrative Group
(FYDIBOHF23SPDLT)/cn=Recipients/cn=e80fc128879e452ebc882f6bca7007fa-Migration.8
Corruption occurs when the migration metadata encounters issues. When corruption occurs, Microsoft receives
a Watson report that will be investigated. To recover from this issue, you must remove the migration batch, and
then re-create the batch. To do this, follow these steps:
1. To remove the corrupted batch, run the following command:
Remove-MigrationBatch -Identity <batch name>
When this issue occurs, a Dr. Watson message is sent to Microsoft for investigation.
The Migration Exchange Replication Service is not running
When you see this error reason, you can verify the health of the service by running the following command:
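The command is missing from this extract; given the Start-Service and Restart-Service commands that follow, it is presumably a Get-Service check such as:

```powershell
# Hypothetical sketch: check the status of the Mailbox Replication service
Get-Service msexchangemailboxreplication
```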
You can also try to start the service by running the following command:
Start-Service msexchangemailboxreplication
You can also try to restart the service by running the following command:
Restart-Service msexchangemailboxreplication
MSExchangeMailboxReplication Service is repeatedly crashing
When the MSExchangeMailboxReplication service crashes or stops responding, you may receive an alert that
resembles the following:
The MRS process has crashed at least 3 times in last 01:00:00. <b>Watson Message:</b> Watson report about to
be sent for process id: 41432, with parameters: E12, <ServerName>, 15.00.0516.024,
MSExchangeMailboxReplication, M.Exchange.MailboxReplicationService, M.E.M.BaseJob.BeginJob,
System.ApplicationException, 7ec9, 15.00.0516.024. ErrorReportingEnabled: True.
When this issue occurs, you can verify the health of the service by running the following command:
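The command is missing from this extract; given the Restart-Service command that follows, it is presumably a Get-Service check such as:

```powershell
# Hypothetical sketch: check the status of the Mailbox Replication service
Get-Service msexchangemailboxreplication
```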
You can also try to restart the service by running the following command:
Restart-Service msexchangemailboxreplication
3. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
4. If the issue still exists, recycle the IIS service by using the IISReset utility, or by running the following
command:
Iisreset /noforce
5. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
6. If the issue still exists, restart the server.
7. After the server restarts, rerun the associated probe as shown in step 2c in the Verifying the issue still exists
section.
8. If the probe continues to fail, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your organization
may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review
your organization's guidelines first.
Explanation
The OAB service is monitored by using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
Common issues
This probe may fail for any of the following common reasons:
The application pool that’s hosted on the monitored CAS is not working correctly.
The monitoring account credentials are incorrect.
The Domain Controllers are not responding.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the OAB.Proxy health set details about server1.contoso.com, run the
following command:
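The command body is missing from this extract; it is presumably a Get-ServerHealth call such as:

```powershell
# Hypothetical sketch: retrieve monitor states for the OAB.Proxy health set
Get-ServerHealth -Identity server1.contoso.com -HealthSet "OAB.Proxy"
```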
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that is in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor is OABProxyTestMonitor. The probe associated with
that monitor is OABProxyTestProbe. To run that probe on server1.contoso.com, run the following
command:
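The command is missing here; presumably it is an Invoke-MonitoringProbe call such as (probe identity prefix assumed):

```powershell
# Hypothetical sketch: rerun the OAB proxy probe on the affected server
Invoke-MonitoringProbe -Identity "OAB.Proxy\OABProxyTestProbe" -Server server1.contoso.com | Format-List
```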
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
5. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
6. If the issue still exists, recycle the IIS service by using the IISReset utility.
7. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
8. If the issue still exists, restart the server.
9. After the server restarts, rerun the associated probe as shown in step 2c in the Verifying the issue still exists
section.
10. If the probe continues to fail, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your organization
may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review
your organization's guidelines first.
Explanation
The Outlook.Proxy service is monitored by using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
Common issues
This probe may fail for any of the following common reasons:
The application pool that is hosted on the monitored CAS is not working correctly.
The monitoring account credentials are incorrect.
The Domain Controllers are not responding.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the Outlook.Proxy health set details about server1.contoso.com, run the
following command:
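The command body is missing from this extract; it is presumably a Get-ServerHealth call such as:

```powershell
# Hypothetical sketch: retrieve monitor states for the Outlook.Proxy health set
Get-ServerHealth -Identity server1.contoso.com -HealthSet "Outlook.Proxy"
```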
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that is in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor is OutlookProxyTestMonitor. The probe associated
with that monitor is OutlookProxyTestProbe. To run that probe on server1.contoso.com, run the
following command:
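The command is missing here; presumably it is an Invoke-MonitoringProbe call such as (probe identity prefix assumed):

```powershell
# Hypothetical sketch: rerun the Outlook proxy probe on the affected server
Invoke-MonitoringProbe -Identity "Outlook.Proxy\OutlookProxyTestProbe" -Server server1.contoso.com | Format-List
```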
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
5. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
6. If the issue still exists, recycle the IIS service by using the IISReset utility.
7. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
8. If the issue still exists, restart the server.
9. After the server restarts, rerun the associated probe as shown in step 2c in the Verifying the issue still exists
section.
10. If the probe continues to fail, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your organization
may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review
your organization's guidelines first.
Explanation
The Outlook Web App service is monitored by using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
Common issues
This probe can fail for several reasons. The following are some of the more common reasons:
The Outlook Web App application pool that’s hosted on the monitored Client Access server (CAS) is not
responding, or the application pool that’s hosted on the Mailbox server is not responding.
The CAS is experiencing networking issues, and it can’t connect to the Mailbox server or the Domain
Controller.
The monitoring account credentials are incorrect.
The user’s database is not mounted, or the Information Store is inaccessible for that mailbox.
The Information Store is not responding.
The Domain Controllers are not responding.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the Outlook Web App health set details about server1.contoso.com, run the following command:
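The command body is missing from this extract; it is presumably a Get-ServerHealth call such as (health-set name assumed):

```powershell
# Hypothetical sketch: retrieve monitor states for the Outlook Web App health set
Get-ServerHealth -Identity server1.contoso.com -HealthSet "OWA"
```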
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert is Unhealthy.
c. Rerun the associated probe for the monitor that’s in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, to run an Outlook Web App monitoring probe on server1.contoso.com, run the
following command:
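The command is missing from this extract; in this Outlook Web App section it is presumably an Invoke-MonitoringProbe call such as (the probe name is an assumption based on the OwaSelfTestProbe described later in this document):

```powershell
# Hypothetical sketch: rerun an OWA probe on the affected server
Invoke-MonitoringProbe -Identity "OWA.Protocol\OwaSelfTestProbe" -Server server1.contoso.com | Format-List
```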
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
In the returned object, you can locate the user’s database name, and you can also determine where the
currently active database resides.
8. If you have configured redirection between sites, you may see probes failing and generating a
MissingKeyword error. This occurs because, by default, CAS probes run by using accounts from any location,
and because a probe does not try to test a CAS in a different site when redirection is used. To resolve this
problem, make sure that the servers in each site are contained in monitoring groups. CAS servers in a given
monitoring group are tested only together with Mailbox servers in the same group.
To determine the monitoring groups for your servers, run the following command:
Get-ExchangeServer | ft MonitoringGroup
To modify the monitoring group on a server, use the MonitoringGroup parameter together with the Set-
ExchangeServer cmdlet. For example, use the following:
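The example command is missing from this extract; presumably it resembles the following (the group name "SiteA" is a placeholder):

```powershell
# Hypothetical sketch: place the server in a named monitoring group
Set-ExchangeServer -Identity server1.contoso.com -MonitoringGroup "SiteA"
```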
9. In IIS Manager, click Application Pools, and then recycle the MSExchangeOWAAppPool application
pool by running the following command from the Exchange Management Shell:
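The command is missing from this extract; presumably it recycles the application pool via the WebAdministration module, for example:

```powershell
# Hypothetical sketch: recycle the OWA application pool
Import-Module WebAdministration
Restart-WebAppPool MSExchangeOWAAppPool
```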
10. Rerun the associated probe, as shown in step 2c in the Verifying the issue still exists section.
11. If the issue still exists, recycle the IIS service by using the IISReset utility or by running the following
command:
Iisreset /noforce
12. Rerun the associated probe, as shown in step 2c in the Verifying the issue still exists section.
13. If the issue still exists, restart the server.
14. After the server restarts, rerun the associated probe, as shown in step 2c in the Verifying the issue still exists
section.
15. If the probe continues to fail, you may require assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your organization
may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review
your organization's guidelines first.
Explanation
The OWA service is monitored by using the following probes and monitors.
The OwaSelfTestProbe probe sends a single HTTP request to the following address:
https://localhost:444/owa/exhealth.check. The probe confirms that the application pool is responding by returning
a 200 OK status code. This probe has no dependency on any other Exchange component.
The OwaDeepTestProbe probe is run against each Mailbox database by using copies on the current server. The
probe determines that a full logon can be made against that server. To do this, it simulates the type of traffic that’s
generated by a Client Access server (CAS) against that specific server. The probe depends on Active Directory
Domain Services (AD DS) for authentication, and on the Mailbox Store for mailbox access. For more information
about probes and monitors, see Server health and performance.
Common issues
This probe may fail for any of the following common reasons:
The OWA application pool that’s hosted on the monitored CAS is not responding, or the application pool
that’s hosted on the Mailbox server is not responding.
The CAS or Mailbox server is experiencing networking issues, and it can’t connect to the other server or to a
Domain Controller.
The monitoring account credentials are incorrect.
The user’s database is not mounted, or the Information Store is inaccessible for that mailbox.
The Information Store is not responding.
The Domain Controllers are not responding.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the OWA.Protocol health set details about server1.contoso.com, run the
following command:
b. Review the command output to determine which monitor reported the error. The AlertValue for the
monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that’s in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor was OwaSelfTestMonitor. The probe associated with
that monitor is OwaSelfTestProbe. To run that probe on server1.contoso.com, run the following
command:
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
When you receive an alert from a health set, the email message contains the following information:
Name of the server that sent the alert
Type of probe that failed (SelfTest or DeepTest)
Time and date when the alert occurred
Path to the folder in which you can find the full HTTP request traces for the probe
By default, the trace files are located in the following folders:
SelfTestProbe: <ExchangeServer>\Logging\Monitoring\OWA\ProtocolProbe
DeepTestProbe: <ExchangeServer>\Logging\Monitoring\OWA\MailboxProbe
Full exception trace of the last error, including diagnostic data and specific HTTP header information
Note You can use the information in the full exception trace to help troubleshoot the issue. The exception
generated by the probe contains a Failure Reason that describes why the probe failed. The Failure Reason
may be any of the following:
MissingKeyword An expected keyword was not found in the server response. In this case, the
exception contains the expected keywords.
NameResolution The DNS resolution is failing to resolve a given server name.
NetworkConnection The probe is receiving a network connection failure when it tries to connect
to the OWA application pool on CAFE.
UnexpectedHttpResponseCode The response contained an unexpected HTTP code. For example,
the server returned a 503 HTTP code.
RequestTimeout The server took too long to respond to a client request.
ScenarioTimeout The probe finished successfully, but took more than one minute to do so. This
usually indicates a system that’s being overloaded.
OwaErrorPage OWA returned an error page. The name of the error that caused the failure is
typically available on the exception message.
OwaMailboxErrorPage OWA returned an error page that contains a Mailbox Store-related error.
This usually indicates such problems as the Mailbox Store being down, or mailboxes that are being
dismounted.
The exception trace contains an important field named FailingComponent, in which the probe attempts to
determine and categorize the failure. For example, the probe may return any of the following values:
Mailbox The probe can reach OWA, but it can’t connect to the Mailbox store. In this case, either the
probe failed outright, or mailbox access latency caused the probe to time out and generate a
ScenarioTimeout error. When these types of failures occur, you should check the health of the Mailbox servers.
Active Directory The probe can reach OWA, but it can’t connect to AD DS. In this case, either the probe
failed outright, or AD DS call latencies caused the probe to time out. When these kinds of failures
occur, you should check the health of the Domain Controllers, and also check the network
connections between the Client Access and Mailbox servers and the Domain Controllers.
OWA This typically means that an error occurred inside the OWA layer. When these kinds of
failures occur, you must verify the health of the OWA process on the Client Access and Mailbox servers, and
also check the network connections.
The exception also contains the most recent HTTP request and response information that was received
before the probe failed.
The escalation body contains the path to the probe logs that can be used to verify the full HTTP web
requests and responses that were sent when the probe failed. This file contains data only for failed probes
because only failed attempts are logged. You can use this information to obtain a more complete view of
why the test failed.
7. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
8. If the issue still exists, recycle the IIS service by using the IISReset utility or by running the following
command:
Iisreset /noforce
9. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
10. If the issue still exists, restart the server.
11. After the server restarts, rerun the associated probe as shown in step 2c in the Verifying the issue still exists
section.
12. If the probe continues to fail, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your organization
may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review
your organization's guidelines first.
6. Copy the HealthMailbox GUID, and then run the following command in the Shell:
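The Shell command is not shown here; a sketch using the -Monitoring switch of Get-Mailbox, where the GUID placeholder stands for the value you copied:

```powershell
# Look up the monitoring mailbox to find its database and server
Get-Mailbox -Monitoring "HealthMailbox<GUID>" | Format-List Name, Database, ServerName
```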
In the returned object, you can locate the user’s database name, and you can also determine where the
currently active database resides.
7. In IIS Manager, click Application Pools, and then recycle the MSExchangeOWAAppPool application
pool by running the following command from the Shell:
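The Shell command was not captured; assuming the WebAdministration module's Restart-WebAppPool cmdlet, it might be:

```powershell
# Recycle the OWA application pool
Restart-WebAppPool MSExchangeOWAAppPool
```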
8. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
9. If the issue still exists, recycle the IIS service by using the IISReset utility or by running the following
command:
Iisreset /noforce
10. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
11. If the issue still exists, restart the server.
12. After the server restarts, rerun the associated probe as shown in step 2c in the Verifying the issue still exists
section.
13. If the probe continues to fail, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your organization
may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review
your organization's guidelines first.
Explanation
The OWA.Protocol.DEP service is monitored using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the OWA.Protocol.DEP health set is unhealthy, first verify that the issue still exists. If the issue does exist,
perform the appropriate recovery actions outlined in the following sections.
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\MSExchange OWA\InstantMessaging
This key should contain an ImplementationDLLPath string that points to the Microsoft.Rtc.Internal.Ucweb DLL.
The default location is C:\Program Files\Microsoft UCMA 4.0\Runtime\SSP\Microsoft.Rtc.Internal.Ucweb.dll.
To fix this issue, reinstall UCMA 4.0 or manually create the registry key. You can download UCMA 4.0 here: Unified
Communications Managed API 4.0 Runtime.
Notepad %ExchangeInstallPath%ClientAccess\Owa\web.config
2. Search for a key named IMServerName. If it's found, verify the FQDN of the Lync 2013 server. If the key is
not found, add it by performing the following steps.
a. Find the tag named <appSettings>.
b. In the <appSettings> node, add the following line:
For example:
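The example line was not captured here; a sketch, where lync.contoso.com is a placeholder for your Lync 2013 server's FQDN:

```xml
<add key="IMServerName" value="lync.contoso.com" />
```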
c. To apply the changes in Outlook Web App, run the following command:
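The command is missing from this copy; one way to apply web.config changes is to recycle the OWA application pool, assuming the WebAdministration cmdlet is available:

```powershell
# Recycle the OWA application pool so the new setting takes effect
Restart-WebAppPool MSExchangeOWAAppPool
```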
Notepad %ExchangeInstallPath%ClientAccess\Owa\web.config
2. Search for a key named IMCertificateThumbprint. If it's found, verify the thumbprint value is correct. If
the key is not found, add it by performing the following steps:
a. Find the tag named <appSettings>.
b. In the <appSettings> node, add the following line:
For example:
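The example line was not captured; a sketch, where the value is a placeholder thumbprint that you would replace with the thumbprint of your instant messaging certificate:

```xml
<add key="IMCertificateThumbprint" value="0123456789ABCDEF0123456789ABCDEF01234567" />
</add>
```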
c. To apply the changes in Outlook Web App, run the following command:
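As above, the command is missing; one way to apply web.config changes is to recycle the OWA application pool, assuming the WebAdministration cmdlet:

```powershell
# Recycle the OWA application pool so the new setting takes effect
Restart-WebAppPool MSExchangeOWAAppPool
```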
Recovery actions for error: "IM Certificate has not become valid yet."
This error indicates the certificate that's used to integrate Lync 2013 and Outlook Web App has invalid dates. To
resolve this error, you need to configure a new certificate, and you need to add the new thumbprint value in the
IMCertificateThumbprint key in %ExchangeInstallPath%ClientAccess\Owa\web.config. For more information
about the certificate requirements, see the Enabling Instant Messaging on Outlook Web App section in Integrating
Microsoft Lync Server 2013 and Microsoft Outlook Web App 2013.
Recovery actions for error: "IM Certificate does not have a private key."
This error indicates the certificate that's used to integrate Lync 2013 and Outlook Web App does not have a private
key. To resolve this error, you need to configure a new certificate that has a private key, and you need to add the
new thumbprint value in the IMCertificateThumbprint key in %ExchangeInstallPath%ClientAccess\Owa\web.config.
For more information about the certificate requirements, see the Enabling Instant Messaging on Outlook Web
App section in Integrating Microsoft Lync Server 2013 and Microsoft Outlook Web App 2013.
10/5/2018 • 3 minutes to read
Explanation
The Outlook Web App service is monitored by using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
Common issues
This probe may fail for any of the following common reasons:
The application pool that’s hosted on the monitored Client Access server (CAS) is not working correctly.
The monitoring account credentials are incorrect.
The Domain Controllers are not responding.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the OWA.Proxy health set details about server1.contoso.com, run the
following command:
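The command was not captured here; a sketch using Get-ServerHealth with a filter on the health set name:

```powershell
# Show only the monitors that belong to the OWA.Proxy health set
Get-ServerHealth server1.contoso.com | ?{$_.HealthSetName -eq "OWA.Proxy"}
```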
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that’s in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor is OWAProxyTestMonitor. The probe associated with
that monitor is OWAProxyTestProbe. To run that probe on server1.contoso.com, run the following
command:
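The probe command is missing here; a sketch, assuming the HealthSet\ProbeName identity form:

```powershell
# Rerun the OWA proxy probe on the server that reported the alert
Invoke-MonitoringProbe OWA.Proxy\OWAProxyTestProbe -Server server1.contoso.com | Format-List
```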
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
5. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
6. If the issue still exists, recycle the IIS service by using the IISReset utility.
7. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
8. If the issue still exists, restart the server.
9. After the server restarts, rerun the associated probe as shown in step 2c in the Verifying the issue still exists
section.
10. If the probe continues to fail, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your organization
may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review
your organization's guidelines first.
Explanation
The POP service is monitored by using the following probes and monitors.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the POP health set details about server1.contoso.com, run the following
command:
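The command was not captured in this copy; a sketch using Get-ServerHealth with a filter on the health set name:

```powershell
# Show only the monitors that belong to the POP health set
Get-ServerHealth server1.contoso.com | ?{$_.HealthSetName -eq "POP"}
```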
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that’s in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor is PopCTPMonitor. The probe associated with that
monitor is PopCTPProbe. To run that probe on server1.contoso.com, run the following command:
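The probe command is missing here; a sketch, assuming the HealthSet\ProbeName identity form (the exact health set prefix for this probe is an assumption):

```powershell
# Rerun the POP customer touch point probe on the server that reported the alert
Invoke-MonitoringProbe POP\PopCTPProbe -Server server1.contoso.com | Format-List
```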
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
4. After all the databases are removed from the Mailbox server, you must verify that the databases have been
moved successfully. To do this, run the following command:
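The verification command was not captured; a sketch using Get-MailboxDatabaseCopyStatus, where an empty result means no active (mounted) copies remain on the server:

```powershell
# List any database copies that are still mounted on this server
Get-MailboxDatabaseCopyStatus -Server server1.contoso.com | ?{$_.Status -eq "Mounted"}
```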
5. Make sure that the server does not host any active copies of the database. Then, restart the server.
6. After the server has successfully restarted, rerun the associated probe as shown in step 2c in the Verifying
the issue still exists section.
7. If the probe succeeds, fail over the databases by running the following command:
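The command is missing from this copy; a sketch that activates a database copy back on the server, where DB1 is a placeholder database name:

```powershell
# Move the active copy of DB1 back to this server
Move-ActiveMailboxDatabase DB1 -ActivateOnServer server1.contoso.com
```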
8. If the probe continues to fail, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your organization
may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review
your organization's guidelines first.
b. Restart the POP3 service on the servers that are running the CAS role.
c. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
d. Run the following command, and then find the location of the log file:
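The command was not captured; a sketch using Get-PopSettings (the LogFileLocation property name is an assumption):

```powershell
# Find where the POP3 protocol log files are written
Get-PopSettings -Server server1.contoso.com | Format-List LogFileLocation
```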
e. Run the following command to determine which mailbox is serving the command by comparing
time stamps with the probe:
f. If any of these servers are reported as unhealthy, follow the steps listed in the PopTestDeepMonitor
and PopSelfTestMonitor Recovery Actions section.
4. If the Mailbox server is reported as healthy, restart the CAS.
5. After the server restarts, rerun the associated probe as described in step 2c in the Verifying the issue still
exists section.
6. Turn off protocol logging by running the following command:
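The command is missing here; a sketch using the documented Set-PopSettings cmdlet:

```powershell
# Disable POP3 protocol logging on this server
Set-PopSettings -Server server1.contoso.com -ProtocolLogEnabled $false
```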
7. Restart the POP3 service on the servers that are running the CAS role. For more information, see Start and
stop the POP3 services.
8. If the probe continues to fail, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your organization
may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review
your organization's guidelines first.
3. If the issue still exists, you need to restart the server. If the server is a CAS, just restart the server. If the
server is a Mailbox server, you must fail over the databases and verify the results. To do this, follow these
steps:
a. Fail over the databases hosted on the server by using the following command:
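The command was not captured in this copy; a sketch using Move-ActiveMailboxDatabase, which moves all active database copies off the specified server:

```powershell
# Move every active database copy off this server
Move-ActiveMailboxDatabase -Server server1.contoso.com
```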
Note In this and all subsequent code examples, replace server1.contoso.com with the actual server
name.
b. Verify that all databases have been moved off the server that’s reporting the issue. To do this, run the
following command:
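The command is missing here; a sketch using Get-MailboxDatabaseCopyStatus, filtered to copies that are still mounted on the server:

```powershell
# An empty result means no active copies remain on this server
Get-MailboxDatabaseCopyStatus -Server server1.contoso.com | ?{$_.Status -eq "Mounted"}
```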
If the command output shows no active copies on the server, restart the server.
4. After the server restarts, wait 10 minutes, and then run the command shown in step 2 again to see whether
the monitor stays healthy.
5. If the monitor is healthy, and this is a Mailbox server, fail over the databases back to the Mailbox server by
running the following command:
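The command was not captured; a sketch that activates a database copy back on the original server, where DB1 is a placeholder database name:

```powershell
# Move the active copy of DB1 back to the original Mailbox server
Move-ActiveMailboxDatabase DB1 -ActivateOnServer server1.contoso.com
```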
6. If the probe continues to fail, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your organization
may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review
your organization's guidelines first.
Explanation
The POP.Protocol health set works in conjunction with the POP health set. For detailed information about all POP
health sets, see Troubleshooting POP Health Set.
Explanation
The POP.Proxy health set works in conjunction with the POP health set.
User Action
For detailed information about the POP health set, see Troubleshooting POP Health Set.
Explanation
The RPS service is monitored using the following probes and monitors:
For more information about probes and monitors, see Server health and performance.
Common issues
When this probe fails, there can be multiple reasons for the problem. Some of the more common issues include the
following:
The application pool that’s hosted on the monitored Client Access server (CAS) is not working properly.
The monitoring account credentials are incorrect.
The Domain Controllers are not responding.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If it does, perform the appropriate recovery
actions outlined in the following sections.
For example, to retrieve the RPS.Proxy health set details on server1.contoso.com run the following
command:
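The command was not captured here; a sketch using Get-ServerHealth with a filter on the health set name:

```powershell
# Show only the monitors that belong to the RPS.Proxy health set
Get-ServerHealth server1.contoso.com | ?{$_.HealthSetName -eq "RPS.Proxy"}
```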
b. Review the command output and determine the monitor that reported the error. The AlertValue for
the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that’s in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor is RPSProxyTestMonitor. The probe associated
with that monitor is RPSProxyTestProbe. To run that probe on server1.contoso.com, run the
following command:
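The probe command is missing here; a sketch, assuming the HealthSet\ProbeName identity form:

```powershell
# Rerun the Remote PowerShell proxy probe on the server that reported the alert
Invoke-MonitoringProbe RPS.Proxy\RPSProxyTestProbe -Server server1.contoso.com | Format-List
```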
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
5. Rerun the associated probe as shown in step 2.c. in the Verifying the issue still exists section.
6. If the issue still exists, recycle the IIS service using the IISReset utility.
7. Rerun the associated probe as shown in step 2.c. in the Verifying the issue still exists section.
8. If the issue still exists, restart the server.
9. After the server restarts, rerun the associated probe as shown in step 2.c. in the Verifying the issue still exists
section.
10. If the probe continues to fail, you may need assistance in resolving this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your organization
may have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review
your organization's guidelines first.
Explanation
The SiteMailbox monitoring system receives passive sync results from the background synchronization service.
This system does not use any probes. The passive synchronization results are written to the SiteMailbox
monitoring system after every synchronization attempt. Synchronizations are also triggered when the following
events occur:
Users access their site mailboxes by using Outlook or Outlook Web App
You run the Update-SiteMailbox command
You open the Outlook Web App Options window, and then you click the Start Sync button on the Sync
Status page for the selected site mailbox
For more information about the Update-SiteMailbox cmdlet, see Update-SiteMailbox.
For more information about probes and monitors, see Server health and performance.
Common issues
The synchronization monitoring service typically triggers an alert when widespread synchronization issues occur.
An alert is not sent when a single site mailbox fails to synchronize. To determine the cause of a synchronization
failure for a single site mailbox, we recommend that you review the site mailbox synchronization log files.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the SiteMailbox health set details about server1.contoso.com, run the
following command:
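The command was not captured in this copy; a sketch using Get-ServerHealth with a filter on the health set name:

```powershell
# Show only the monitors that belong to the SiteMailbox health set
Get-ServerHealth server1.contoso.com | ?{$_.HealthSetName -eq "SiteMailbox"}
```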
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
Troubleshooting steps
When you receive an alert from a health set, the email message contains the following information:
Name of the server that sent the alert
Time and date when the alert occurred
Authentication mechanism used, and credential information
Full exception trace of the last error, including diagnostic data and specific HTTP header information
Note You can use the information in the full exception trace to help troubleshoot the issue. The exception
generated by the probe contains a Failure Reason that describes why the probe failed.
Background synchronization errors
When the background synchronization process fails, you may receive an alert that resembles the following:
The Site Mailbox background sync is failing at least 25%: 41 failures out of 87 attempts. Sample sync result:
[Message:The remote server returned an error: (401) Unauthorized.][Type:System.Net.WebException]
This alert is triggered when a consistently high percentage of synchronization failures have occurred during the
previous four hours. To avoid false alarms, an alert is sent only when the following conditions are met within a
15-minute window during the previous four hours:
At least 20 failures occur within a 15-minute window.
The percentage of failures compared to total attempts exceeds 25 percent within a 15-minute window.
Every site mailbox in Exchange is linked to a SharePoint site. For each of the site mailboxes on a given Exchange
server that is hosting the Mailbox role, the server synchronizes site mailbox-related information from SharePoint.
Two types of syncs take place during this process: membership sync and document sync. The metadata for these
sync processes originates from different web services. Additionally, a given Exchange server may contain site
mailboxes that are linked to several SharePoint servers or farms. Therefore, the alert may originate from multiple
Mailbox servers, depending on the following conditions:
1. How actively-used site mailboxes in the organization are distributed
2. The SharePoint servers to which the actively-used site mailboxes are linked
3. Whether the Mailbox server has sufficient sync volume to meet alert thresholds
The sample synchronization result in the alert may help you determine the cause of the failure. Details about the
success or failure of each sync attempt are recorded in the <Exchange installation
directory>\Logging\TeamMailbox folder. Review the most recent
Microsoft.Exchange.ServiceHost_TeamMailboxSyncLog* files for failures by searching for the term failed. You can
also use the Test-OAuthConnectivity, Test-SiteMailbox, and Get-SiteMailboxDiagnostics cmdlets to
troubleshoot further.
The MSExchangeServiceHost service is not running
If the MSExchangeServiceHost service is not running, you receive an alert that resembles the following:
The 'MSExchangeServiceHost' service is not running after the recovery attempts. The service may be disabled or
in crash loop.
To resolve this issue, verify that the MSExchangeServiceHost service is running on the server that sent the alert. If
the service is running, review the Windows event logs for indications of why the service may not have been
running earlier, such as manual service control or repeated crashes of the service.
The MSExchangeServiceHost service has crashed
If the MSExchangeServiceHost service crashes, you receive an alert that resembles the following:
The MSExchangeServiceHost process has crashed at least 3 times in last 60 minutes.
Watson Message:
<Message>
To resolve this issue, review the Windows Application event log on the server that sent the alert for 4999 events
regarding the MSExchangeServiceHost service. The detail text can provide information about the cause of the
issue.
Explanation
The UM service is monitored by using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the UM.Protocol health set details about server1.contoso.com, run the
following command:
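The command was not captured here; a sketch using Get-ServerHealth with a filter on the health set name:

```powershell
# Show only the monitors that belong to the UM.Protocol health set
Get-ServerHealth server1.contoso.com | ?{$_.HealthSetName -eq "UM.Protocol"}
```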
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that is in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor is UMSelfTestMonitor. The probe associated with
that monitor is UMSelfTestProbe. To run that probe on server1.contoso.com, run the following
command:
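The probe command is missing from this copy; a sketch, assuming the HealthSet\ProbeName identity form:

```powershell
# Rerun the UM self-test probe on the server that reported the alert
Invoke-MonitoringProbe UM.Protocol\UMSelfTestProbe -Server server1.contoso.com | Format-List
```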
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
Troubleshooting steps
When you receive an alert from a health set, the email message contains the following information:
Name of the server that sent the alert
Time and date when the alert occurred
Authentication mechanism used, and credential information
Full exception trace of the last error, including diagnostic data and specific HTTP header information
Note You can use the information in the full exception trace to help troubleshoot the issue. The exception
generated by the probe contains a Failure Reason that describes why the probe failed.
Sip Options to UM Service have failed
Determine whether the UM service is disabled. If the UM service is stopped or disabled, restart it.
More than {0}% of Inbound Calls were Rejected by the UM Service Over the Last Hour
Review the event logs on the Client Access server (CAS) to determine whether the UM objects, such as
umipgateway and umhuntgroup, are configured correctly.
If the event logs do not contain enough information, you may have to enable UM event logs at the Expert level,
and then review the UM trace log files.
More than {0}% of Inbound Calls were Rejected by the UM Worker Process Over the Last Hour
Review the event logs on the CAS to determine whether the UM objects, such as the umipgateway and
umhuntgroup objects, are configured correctly.
If the event logs do not contain enough information, you may have to enable UM event logs at the Expert level,
and then review the UM trace log files.
Less than {0}% of Messages were Successfully Processed Over the Last Hour
Review the event logs on the CAS to determine whether the UM objects, such as the umipgateway and
umhuntgroup objects, are configured correctly.
If the event logs do not contain enough information, you may have to enable UM event logs at the Expert level,
and then review the UM trace log files.
The Microsoft Exchange Unified Messaging service rejected a call because the UM pipeline is full
Review the event logs on the CAS to determine whether the UM objects, such as the umipgateway and
umhuntgroup objects, are configured correctly.
If the event logs do not contain enough information, you may have to enable UM event logs at the Expert level,
and then review the UM trace log files.
The A/V Edge service is misconfigured or is not running
1. Review the event logs on the Mailbox server to try to determine why calls from the Lync server are failing.
Then, do the following:
Make sure that the Lync pool that is selected by the UM service is operational.
To use a specific Lync server, run the following command:
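The command was not captured here; a sketch using Set-UMService (the -SIPAccessService parameter and the pool FQDN:port value are assumptions; lyncpool.contoso.com is a placeholder):

```powershell
# Point the UM service at a specific Lync pool for A/V Edge traffic
Set-UMService -Identity server1.contoso.com -SIPAccessService lyncpool.contoso.com:5061
```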
The UM server was unable to acquire credentials successfully with the Communications Server A/V
Edge service
Review the event logs to investigate which Lync pool is selected, and to verify that the selected Lync pool is
operational.
The Communications Server Audio/Video Edge was unable to open a port or allocate resources while
attempting to establish a session
Review the event logs to investigate which Lync pool is selected, and to verify that the selected Lync pool is
operational.
The Microsoft Exchange Unified Messaging service certificate is nearing its expiration date
Renew the UM service certificate on the Mailbox server.
Additional troubleshooting steps:
1. Start IIS Manager, and connect to the server that’s reporting the issue to determine whether the
MSExchangeServicesAppPool application pool is running.
2. In IIS Manager, click Application Pools, and then recycle the MSExchangeServicesAppPool application
pool. To do this, run the following command from the Exchange Management Shell:
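The Shell command is missing here; assuming the WebAdministration module's Restart-WebAppPool cmdlet, it might be:

```powershell
# Recycle the Exchange Web Services application pool
Restart-WebAppPool MSExchangeServicesAppPool
```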
3. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
4. If the issue still exists, recycle the IIS service by using the IISReset utility or by running the following
command:
Iisreset /noforce
5. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
6. If the issue still exists, restart the server.
7. After the server restarts, rerun the associated probe as shown in step 2c in the Verifying the issue still exists
section.
8. If the probe continues to fail, you may need assistance to resolve this issue. Contact a Microsoft Support
professional to resolve this issue. To contact a Microsoft Support professional, visit the Exchange Server
Solutions Center. In the navigation pane, click Support options and resources and use one of the options
listed under Get technical support to contact a Microsoft Support professional. Because your
organization may have a specific procedure for directly contacting Microsoft Product Support Services, be
sure to review your organization's guidelines first.
Explanation
The UM.CallRouter service is monitored by using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the UM.CallRouter health set details about server1.contoso.com, run the
following command:
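The command was not captured in this copy; a sketch using Get-ServerHealth with a filter on the health set name:

```powershell
# Show only the monitors that belong to the UM.CallRouter health set
Get-ServerHealth server1.contoso.com | ?{$_.HealthSetName -eq "UM.CallRouter"}
```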
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that is in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor is UMCallRouterTestMonitor. The probe associated
with that monitor is UMCallRouterTestProbe. To run that probe on server1.contoso.com, run the
following command:
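The probe command is not reproduced in this copy. A typical Invoke-MonitoringProbe invocation is sketched below (the HealthSet\Probe identity format is an assumption based on the cmdlet's usual usage):

```powershell
# Rerun the UM Call Router probe and show the full output, including the Result value
Invoke-MonitoringProbe UM.CallRouter\UMCallRouterTestProbe -Server server1.contoso.com |
    Format-List
```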
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
Troubleshooting steps
When you receive an alert from a health set, the email message contains the following information:
Name of the server that sent the alert
Time and date when the alert occurred
Authentication mechanism used, and credential information
Full exception trace of the last error, including diagnostic data and specific HTTP header information
Note: You can use the information in the full exception trace to help troubleshoot the issue. The exception
generated by the probe contains a Failure Reason that describes why the probe failed.
SIP OPTIONS requests to the UM Call Router service have failed
Determine whether the UM Call Router service is disabled. If the UM Call Router service is stopped or disabled,
restart it.
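As a sketch, assuming the default Exchange 2013 service name for the UM Call Router (MSExchangeUMCR), the service state can be checked and the service restarted from PowerShell:

```powershell
# Check whether the UM Call Router service is stopped or disabled (service name assumed)
Get-Service MSExchangeUMCR | Format-List Name, Status
# If the service is stopped, restart it
Restart-Service MSExchangeUMCR
```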
More than 50% of Inbound Calls were Rejected by the UM Call Router over the Last Hour
Review the event logs on the Client Access server (CAS) to determine whether the UM objects, such as the UM IP
gateway and the UM hunt group, are configured correctly.
If the event logs do not contain enough information, you may have to enable UM event logs at the Expert level,
and then review the UM trace log files.
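As a hedged sketch, the Exchange Get-EventLogLevel and Set-EventLogLevel cmdlets can list and raise UM logging levels; the category name below is illustrative, so list the categories first and pick the relevant one:

```powershell
# List the UM-related event log categories and their current levels
Get-EventLogLevel -Server server1.contoso.com |
    Where-Object { $_.Identity -like "MSExchange Unified Messaging*" }
# Raise one category to Expert (category name shown is illustrative)
Set-EventLogLevel "MSExchange Unified Messaging\UMCore" -Level Expert
```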
More than {0}% of Missed Call Notification Proxy failed at UM Call Router over the Last Hour
Review the event logs on the CAS to determine whether the UM objects, such as the UM IP gateway and the UM
hunt group, are configured correctly.
If the event logs do not contain enough information, you may have to enable UM event logs at the Expert level,
and then review the UM trace log files.
The Microsoft Exchange Unified Messaging Call Router certificate is nearing its expiration date
Renew the UM Call Router service certificate on the CAS.
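To confirm which certificate is expiring before renewing it, the Get-ExchangeCertificate cmdlet can be used; a minimal sketch:

```powershell
# Show each certificate's thumbprint, assigned services, and expiration date
Get-ExchangeCertificate -Server server1.contoso.com |
    Format-List Thumbprint, Services, NotAfter
```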
Additional troubleshooting steps:
1. Start IIS Manager, and then connect to the server that’s reporting the issue to determine whether the
MSExchangeServicesAppPool application pool is running.
2. In IIS Manager, click Application Pools, and then recycle the MSExchangeServicesAppPool application
pool by running the following command from the Shell:
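The Shell command is not reproduced in this copy; assuming the Windows WebAdministration module, recycling the named pool typically looks like:

```powershell
Import-Module WebAdministration
# Recycle the application pool named in step 2
Restart-WebAppPool MSExchangeServicesAppPool
```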
3. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
4. If the issue still exists, recycle the IIS service by using the IISReset utility or by running the following
command:
Iisreset /noforce
5. Rerun the associated probe as shown in step 2c in the Verifying the issue still exists section.
6. If the issue still exists, restart the server.
7. After the server restarts, rerun the associated probe as shown in step 2c in the Verifying the issue still exists
section.
8. If the probe continues to fail, you may need assistance to resolve the issue. To contact a Microsoft Support
professional, visit the Exchange Server Solutions Center. In the navigation pane, click Support options and
resources, and then use one of the options listed under Get technical support. Because your organization may
have a specific procedure for directly contacting Microsoft Product Support Services, be sure to review your
organization's guidelines first.
Explanation
The UM.Protocol service is monitored by using the following probes and monitors.
For more information about probes and monitors, see Server health and performance.
User Action
It's possible that the service recovered after it issued the alert. Therefore, when you receive an alert that specifies
that the health set is unhealthy, first verify that the issue still exists. If the issue does exist, perform the appropriate
recovery actions outlined in the following sections.
For example, to retrieve the UM.Protocol health set details about server1.contoso.com, run the
following command:
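The retrieval command is not reproduced in this copy. As with the other health sets, a typical Exchange 2013 invocation of Get-ServerHealth is sketched below (the filter form is an assumption):

```powershell
# List the monitors that belong to the UM.Protocol health set on server1.contoso.com
Get-ServerHealth -Identity server1.contoso.com |
    Where-Object { $_.HealthSetName -eq "UM.Protocol" }
```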
b. Review the command output to determine which monitor reported the error. The AlertValue value
for the monitor that issued the alert will be Unhealthy.
c. Rerun the associated probe for the monitor that’s in an unhealthy state. Refer to the table in the
Explanation section to find the associated probe. To do this, run the following command:
For example, assume that the failing monitor is UMSelfTestMonitor. The probe associated with
that monitor is UMSelfTestProbe. To run that probe on server1.contoso.com, run the following
command:
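The probe command is not reproduced in this copy. A typical Invoke-MonitoringProbe invocation is sketched below (the HealthSet\Probe identity format is an assumption based on the cmdlet's usual usage):

```powershell
# Rerun the UM self-test probe and show the full output, including the Result value
Invoke-MonitoringProbe UM.Protocol\UMSelfTestProbe -Server server1.contoso.com |
    Format-List
```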
d. In the command output, review the Result value of the probe. If the value is Succeeded, the issue
was a transient error, and it no longer exists. Otherwise, refer to the recovery steps outlined in the
following sections.
Troubleshooting steps
When you receive an alert from a health set, the email message contains the following information:
Name of the server that sent the alert
Time and date when the alert occurred
Authentication mechanism used, and credential information
Full exception trace of the last error, including diagnostic data and specific HTTP header information
Note: You can use the information in the full exception trace to help troubleshoot the issue. The exception
generated by the probe contains a Failure Reason that describes why the probe failed.
For more information about troubleshooting UM alert messages, see Troubleshooting UM Health Set.