Manage Logging

      The Logging facility allows a record to be maintained of important events that occur on Enterprise Analytics.

      Logging Overview

      The Enterprise Analytics Logging facility records important events and saves the details to log files on disk. Additionally, events of cluster-wide significance are displayed on the Logs screen, in the Enterprise Analytics Web Console.

      The Logs screen provides a comprehensive view of cluster events, displaying log messages with timestamps, severity levels, and detailed descriptions of system activities.

      By default, on Linux systems, log files are saved to /opt/enterprise-analytics/var/lib/couchbase/logs.

      Collecting Information

      On each node within an Enterprise Analytics cluster, logging is performed continuously. A subset of the results can be reviewed on the Logs screen of the Enterprise Analytics Web Console, while all details are saved to the logs directory, as described above.

      The logs directory may include audit.log.

      This is a special log file, used to manage cluster-security, and is handled separately from the other log files. The information provided throughout the remainder of this page — on collecting, uploading, redacting, and more — does not apply to audit.log. For information about audit.log, see Auditing.

      Additionally, explicit logging can be performed by the user. This allows comprehensive and fully updated information to be generated as required. The output includes everything currently on disk, together with additional data that is gathered in real time. Explicit logging can either be performed for all nodes in the cluster, or for one or more individual nodes. The results are saved as zip files: each zip file contains the log-data generated for an individual node.

      Explicit logging can be performed by means of the Enterprise Analytics CLI utility cbcollect_info. The documentation for this utility, provided here, includes a complete list of the log files that can be created, and a description of the contents of each.

      Additionally, administrators with either the Full Admin or Cluster Admin role can perform explicit logging by means of Enterprise Analytics Web Console: on the Logs page, click on the Collect Information tab, located near the top.

      For administrators without either of these roles, this tab does not appear.

      This opens the Collect Information screen, which allows logs and diagnostic information to be collected either from all or from selected nodes within the cluster.

      The Collect Information screen includes the following options:

      • Node selection: Choose to collect logs from all nodes or select specific nodes.

      • Redact Logs panel: Specify a log redaction level (described in Applying Redaction).

      • Specify custom temp directory checkbox: Specify the absolute pathname of a directory for temporary data storage during collection.

      • Specify custom destination directory checkbox: Specify the absolute pathname for completed zip files.

      • Upload to Couchbase checkbox: Enable direct upload to Couchbase Support (described in Uploading Log Files).

      To start the collection process, follow these steps:

      1. Click the Start Collecting button.

      2. A notification displays indicating that the collection process is running.

      3. A stop button is provided to allow the collection process to be stopped if necessary.

      4. As the collection process completes on each node, a notification displays the progress.

      5. When the process has completed for all nodes, the system shows the results with details about the created log files.

      A set of log files is created for each node in the cluster. Each node's files are saved together as a single zip file in the specified ___location.

      Uploading Log Files

      Log files can be uploaded to Couchbase, for inspection by Couchbase Support.

      For information about performing upload at the command-prompt, see cbcollect_info.

      To upload by means of Enterprise Analytics Web Console, before starting the collection process, check Upload to Couchbase.

      When the Upload to Couchbase option is selected, the interface expands to show additional fields:

      • Upload to Host field: Contains the server ___location to which the customer data is uploaded.

      • Customer Name field (required): Your organization or customer name.

      • Upload Proxy field (optional): Hostname of a remote system for proxy upload.

      • Bypass Reachability Checks checkbox: When unchecked (default), upload specifications are verified before collection begins. When checked, the system attempts to gather and upload the data without pre-verifying the upload specifications.

      • Ticket Number field (optional): Support ticket number if available.

      When all required information has been entered, click the Start Collecting button to begin information collection. When collection and upload have been completed, the URL of the uploaded zip file is displayed.

      Getting a Cluster Summary

      A summary of the cluster’s status can be acquired by means of a link available in the Collect Information panel.

      Click Get cluster summary, which opens the Cluster Summary Info dialog.

      This dialog displays a JSON document containing detailed information on the current configuration and status of the entire cluster. The information can be copied to the clipboard using the Copy to Clipboard button, and then shared manually with Couchbase Support, either in addition to, or as an alternative to, log collection.

      Understanding Redaction

      Optionally, log files can be redacted. This means that user-data, considered to be private, is removed. Such data includes:

      • Key/value pairs in JSON documents

      • Usernames

      • Query-fields that reference key/value pairs and/or usernames

      • Names and email addresses retrieved during product registration

      • Extended attributes

      This redaction of user-data is referred to as partial redaction. (Full redaction, which will be available in a forthcoming version of Enterprise Analytics, additionally redacts meta-data.)

      In each modified log file, hashed text (achieved with SHA1) is substituted for redacted text. For example, the following log file fragment displays private data — a Couchbase username:

      0ms [I0] {2506} [INFO] (instance - L:421) Effective connection string:
      couchbase://127.0.0.1?username=Administrator&console_log_level=5&;.
      Bucket=default

      The redacted version of the log file might appear as follows:

      0ms [I0] {2506} [INFO] (instance - L:421) Effective connection string:
      <UD>e07a9ca6d84189c1d91dfefacb832a6491431e95</UD>.
      Bucket=<UD>e16d86f91f9fd0b110be28ad00e348664b435e9e</UD>

      Redaction may eliminate some parameters containing non-private data, as well as all parameters containing private data.
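      The substitution shown above can be illustrated with a short Python sketch. Note that this is only an illustration: the actual hashes produced by Enterprise Analytics depend on a salt chosen at collection time, and the function name and salt value below are assumptions, not the product's implementation.

```python
import hashlib

def redact(value: str, salt: str = "example-salt") -> str:
    """Replace a private value with a salted SHA-1 hash, wrapped in <UD> tags.

    Illustrative only: the real collector chooses its own salt, so the
    digests it emits will differ from the ones produced here.
    """
    digest = hashlib.sha1((salt + value).encode("utf-8")).hexdigest()
    return f"<UD>{digest}</UD>"

# A username such as the one in the log fragment above becomes an opaque tag.
print(redact("Administrator"))
```

      Because the hash is deterministic for a given salt, the same private value always redacts to the same tag within one collection, which allows occurrences to be cross-referenced without revealing the underlying data.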

      Redaction of log files may have one or both of the following consequences:

      • Logged issues are harder to diagnose, by both the user and Couchbase Support.

      • Log collection is more time-consuming, since redaction is performed at collection time.

      Applying Redaction

      Redaction of log files saved on the cluster can be applied as required, when performing explicit logging, by means of either cbcollect_info or the Logs facility of Enterprise Analytics Web Console.

      For information about performing explicit logging with redaction at the command-prompt, see cbcollect_info.

      To perform explicit logging with redaction by means of Enterprise Analytics Web Console, before starting the collection process, access the Redact Logs panel, on the Collect Information screen. This panel features two radio buttons:

      • No Redaction: Collects logs without redaction

      • Partial Redaction: Removes sensitive user data from logs

      Select the Partial Redaction radio button to enable redaction. Guidance on redaction is displayed in the interface to help you understand the implications.

      Click the Start Collecting button to begin the process. A notification explains that the collection process is now running. When the process has completed, a notification appears, specifying the ___location (local or remote) of each created zip file.

      When redaction has been specified, two zip files are provided for each node: one file containing redacted data, the other unredacted data.

      Redacting Log Files Outside the Cluster

      Certain Couchbase technologies — such as cbbackupmgr, the SDK, connectors, and Mobile — create log files saved outside the Couchbase Cluster. These can be redacted by means of the command-line tool cblogredaction. Multiple log files can be specified simultaneously. Each file must be specified as plain text. Optionally, the salt to be used can be automatically generated.

      For example:

      $ cblogredaction /Users/username/testlog.log -g -o /Users/username -vv
      2018/07/17T11:27:06 WARNING: Automatically generating salt. This will make it difficult to cross reference logs
      2018/07/17T11:27:07 DEBUG: /Users/username/testlog.log - Starting redaction file size is 19034284 bytes
      2018/07/17T11:27:07 DEBUG: /Users/username/testlog.log - Log redacted using salt: <ud>COeAtexHB69hGEf3</ud>
      2018/07/17T11:27:07 INFO: /Users/username/testlog.log - Finished redacting, 50373 lines processed, 740 tags redacted, 0 lines with unmatched tags

      For more information, see the corresponding man page, or run the command with the --help option.

      Log File Locations

      Enterprise Analytics creates log files in the following locations.

      Platform   Location

      Linux      /opt/enterprise-analytics/var/lib/couchbase/logs

      Log File Listing

      The following table lists the log files to be found on Enterprise Analytics. Unless otherwise specified, each file is named with the .log extension.

      File Log Contents

      audit

      Security audit log for administrators.

      babysitter

      Troubleshooting log for the babysitter process which is responsible for spawning all Enterprise Analytics processes and respawning them where necessary.

      couchdb

      Troubleshooting log for the couchdb subsystem which underlies map-reduce.

      debug

      Debug-level troubleshooting for the Cluster Manager.

      error

      Error-level troubleshooting log for the Cluster Manager.

      http_access

      The admin access log records server requests (including administrator logins) to the REST API or Enterprise Analytics Web Console. It is output in common log format and contains several important fields such as remote client IP, timestamp, GET/POST request and resource requested, HTTP status code, and so on.

      http_access_internal

      The internal admin access log records internal server requests (including administrator logins) to the REST API or Enterprise Analytics Web Console. It is output in common log format and contains several important fields such as remote client IP, timestamp, GET/POST request and resource requested, HTTP status code, and so on.

      info

      Info-level troubleshooting log for the Cluster Manager.

      json_rpc

      Log used by the Cluster Manager.

      mapreduce_errors

      JavaScript and other view-processing errors are reported in this file.

      memcached

      Contains information relating to the core memcached component, including DCP stream requests and slow operations.
      It is possible to adjust the logging for slow operations. See [adjust-threshold-slow-op-logging] for details.

      metakv

      Troubleshooting log for the metakv store, a cluster-wide metadata store.

      ns_couchdb

      Contains information related to starting up the couchdb subsystem.

      prometheus

      Log for the instance of Prometheus that runs on the current node, supporting the gathering and management of Enterprise Analytics metrics. (See the Metrics Reference for more information.)

      rebalance

      Contains reports on rebalances that have occurred. Up to the last five reports are maintained. Each report is named in accordance with the time it was run: for example, rebalance_report_2020-03-17T11:10:17Z.json. See the Rebalance Reference, for detailed information.

      reports

      Contains progress and crash reports for the Erlang processes. Due to the nature of Erlang, processes crash and restart upon an error.

      ssl_proxy

      Troubleshooting log for the ssl proxy spawned by the Cluster Manager.

      stats

      Contains periodic statistic dumps from the Cluster Manager.

      analytics_access

      Information about access attempts made to the REST/HTTP port of the Analytics Service.

      analytics_cbas_debug

      Debugging information, related to the Analytics Service.

      analytics_dcpdebug

      DCP-specific debugging information related to the Analytics Service.

      analytics_dcp_failed_ingestion

      Information about documents that have failed to be imported/ingested from the Data Service into the Analytics Service.

      analytics_debug

      Events logged by the Analytics Service at the DEBUG logging level.

      analytics_error

      Events logged by the Analytics Service at the ERROR logging level.

      analytics_info

      Events logged by the Analytics Service at the INFO logging level.

      analytics_shutdown

      Information concerning the shutting down of the Analytics Service.

      analytics_warn

      Events logged by the Analytics Service at the WARN logging level.

      Log File Rotation

      The memcached log file is rotated when it has reached 10MB in size; twenty rotations being maintained — the current file, plus nineteen compressed rotations. Other logs are automatically rotated after they have reached 40MB in size; ten rotations being maintained — the current file, plus nine compressed rotations.

      To provide custom rotation-settings for each component, add the following to the static_config file:

      {disk_sink_opts_disk_debug,
              [{rotation, [{size, 10485760},
              {num_files, 10}]}]}.

      This rotates the debug.log at 10MB, and keeps ten copies of the log: the current log and nine compressed logs.
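      As a sanity check when choosing such settings, the maximum disk footprint implied by a rotation configuration can be estimated. The sketch below computes an upper bound (it ignores the compression applied to rotated copies, so actual usage is lower):

```python
def max_log_footprint(size_bytes: int, num_files: int) -> int:
    """Upper bound on disk used by one log: current file plus rotated copies,
    assuming no compression of the rotated files."""
    return size_bytes * num_files

# The sample configuration above: rotate at 10485760 bytes (10 MiB),
# keep ten files in total (the current log plus nine rotations).
print(max_log_footprint(10_485_760, 10))  # 104857600 bytes, i.e. 100 MiB
```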

      Log rotation settings can be changed.

      This is not advised, and only the default log rotation settings are supported by Couchbase.

      Changing Log File Locations

      The default log ___location on Linux systems is /opt/enterprise-analytics/var/lib/couchbase/logs. The ___location can be changed.

      This is not advised, and only the default log ___location is supported by Couchbase.

      To change the ___location, proceed as follows:

      1. Log in as root or sudo, and locate the static_config file within the Enterprise Analytics installation; for example, /opt/enterprise-analytics/etc/couchbase/static_config.

      2. Edit the static_config file: change the error_logger_mf_dir variable, specifying a different directory. For example: {error_logger_mf_dir, "/home/user/cb/opt/enterprise-analytics/var/lib/couchbase/logs"}

      3. Stop and restart Enterprise Analytics. See Startup and Shutdown.

      Changing Log File Levels

      The default logging level for all log files is debug, except for couchdb, which is set to info. Logging levels can be changed.

      This is not advised, and only the default logging levels are supported by Couchbase.

      Either persistent or dynamic changes can be made to logging levels.

      Persistent Changes

      Persistent means that changes continue to be implemented, should an Enterprise Analytics reboot occur. To make a persistent change on Linux systems, proceed as follows:

      1. Log in as root or sudo, and locate the static_config file within the Enterprise Analytics installation; for example, /opt/enterprise-analytics/etc/couchbase/static_config.

      2. Edit the static_config file and change the desired log component. (Parameters with the loglevel_ prefix establish logging levels.)

      3. Stop and restart Enterprise Analytics. See Startup and Shutdown.

      Dynamic Changes

      Dynamic means that if an Enterprise Analytics reboot occurs, the changed logging levels revert to the default. To make a dynamic change, execute a curl POST command, using the following syntax:

      curl -X POST -u adminName:adminPassword HOST:PORT/diag/eval \
                    -d 'ale:set_loglevel(<log_component>,<logging_level>).'

      • log_component: The component whose logging level is to be changed; for example, ns_server.

      • logging_level: The available log levels are debug, info, warn, and error.

        curl -X POST -u Administrator:password http://127.0.0.1:8091/diag/eval \
                        -d 'ale:set_loglevel(ns_server,error).'

      Collecting Logs Using the CLI

      To collect logs, use the CLI command cbcollect_info.

      To start and stop log collection, and to retrieve log-collection status, the REST API can also be used.

      Collecting Logs Using the REST API

      The Logging REST API provides the endpoints for retrieving log and diagnostic information.

      To retrieve log information use the /diag and /sasl_logs REST endpoints.
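      As a sketch, a log can be retrieved from the /sasl_logs endpoint with basic authentication. The host, port 8091, the credentials, and the log name debug below are assumptions for illustration; substitute your own values.

```python
import base64
import urllib.request

def build_log_request(host: str, user: str, password: str,
                      log_name: str) -> urllib.request.Request:
    """Build an authenticated GET request for /sasl_logs/<log_name>.

    Assumes the default admin port 8091; adjust for your deployment.
    """
    url = f"http://{host}:8091/sasl_logs/{log_name}"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return urllib.request.Request(url, headers={"Authorization": f"Basic {token}"})

req = build_log_request("127.0.0.1", "Administrator", "password", "debug")
# urllib.request.urlopen(req).read() would return the raw log text.
print(req.full_url)
```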

      Getting Threshold Details

      The current settings are retrieved by using the mcctl CLI to execute the get sla command:

      These settings only apply to the nodes where the changes are made.

      You must implement the changes on each node to ensure they are applied across the cluster.

      You must also configure the node to run the data service.

      Getting threshold details:

      /opt/enterprise-analytics/bin/mcctl get sla

      Result:

      {"comment": "Current MCBP SLA configuration",
       "version": 1,
       "default": {"slow": "500 ms"},
       "COMPACT_DB": {"slow": "1800 s"},
       "DELETE_BUCKET": {"slow": "10 s"},
       "SEQNO_PERSISTENCE": {"slow": "30 s"}}

      The JSON message returned gives details of the operation being logged and the threshold time that will cause a timing message to be logged.
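      Because the output is JSON, the thresholds can also be read programmatically. The following sketch parses a sample document of the shape shown above and extracts the per-operation slow thresholds:

```python
import json

# Sample output of the same shape as the mcctl "get sla" result above.
sample = """
{"comment": "Current MCBP SLA configuration",
 "version": 1,
 "default": {"slow": "500 ms"},
 "COMPACT_DB": {"slow": "1800 s"},
 "DELETE_BUCKET": {"slow": "10 s"},
 "SEQNO_PERSISTENCE": {"slow": "30 s"}}
"""

sla = json.loads(sample)

# Per-operation thresholds are the object-valued entries; "comment" and
# "version" are metadata and are skipped.
thresholds = {k: v["slow"] for k, v in sla.items() if isinstance(v, dict)}

print(thresholds["default"])     # 500 ms
print(thresholds["COMPACT_DB"])  # 1800 s
```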