
Employee Monitoring Vendor Security Assessment: 40 Questions to Ask Before You Sign

Employee monitoring software processes some of the most sensitive data your organization handles: screenshots of employee screens, keystroke activity, application usage, behavioral patterns, and in some configurations, audio recordings. Before you sign a contract with any monitoring vendor, you need their security answers in writing. This guide provides 40 questions organized across eight security domains — and explains what the right answer looks like for each one.

An employee monitoring vendor security assessment is a structured due diligence process that evaluates a monitoring software vendor's data security practices, certification status, data handling policies, and contractual protections before an organization deploys the vendor's software on employee devices. Monitoring software occupies a uniquely sensitive position in organizational security because it operates at the endpoint level, processes behavioral data about individual employees, captures visual and text-based data from employee screens, and transmits all of this data to vendor-managed cloud infrastructure. Organizations evaluating their tooling stack should also review the distinctions covered in employee monitoring vs EDR to understand how these security tool categories differ and overlap. A monitoring vendor with weak security practices is not just a compliance risk. It is a data breach risk with a very large blast radius covering every employee in your organization.

This assessment framework is organized into eight domains that cover the full lifecycle of monitoring data: from the moment it is collected at the endpoint through transmission, storage, access control, breach response, AI data use, and contract exit. Work through each domain with any vendor you are seriously evaluating. Vendors who cannot respond to all 40 questions with documented, verifiable answers represent an elevated vendor risk that should factor into your selection decision.

Domain 1: Data Encryption (5 Questions)

Encryption is the foundation of monitoring data security. Employee behavioral data, screenshots, and activity logs must be encrypted both during transmission and while stored on vendor infrastructure. Encryption standards have evolved significantly since 2020, and vendors still citing older standards deserve scrutiny.

  1. What encryption standard is used for data in transit from the endpoint agent to your servers? The acceptable answer is TLS 1.2 minimum, with TLS 1.3 preferred. Any vendor using TLS 1.0 or 1.1 (which were deprecated in 2021) represents an unacceptable security posture.
  2. What encryption standard is used for data at rest (screenshots, activity logs, recordings)? The acceptable answer is AES-256. AES-128 is technically acceptable but represents a lower standard than modern best practice for this category of sensitive data.
  3. Are encryption keys managed by your organization or by the vendor? Customer-managed encryption keys (CMEK) provide the highest security, as the vendor cannot access plaintext data without your keys. Many monitoring vendors do not yet offer CMEK. Understand what key management model applies and what access it implies for the vendor's own team.
  4. What happens to encryption keys when a customer terminates the contract? Encryption key destruction should occur within 30 days of contract termination. Any vendor who cannot commit to a specific key destruction timeline creates ongoing data access risk post-termination.
  5. Is data encrypted during processing (in-use encryption) or only at rest and in transit? In-use encryption is an emerging standard that few vendors currently meet, but the question reveals the vendor's security investment roadmap and their awareness of state-of-the-art practices.
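
The transport-encryption bar in question 1 is easy to verify yourself before trusting the vendor's answer. The sketch below, using Python's standard `ssl` module, refuses anything below TLS 1.2 and reports what the server actually negotiates; the hostname is a placeholder, and `meets_minimum` encodes the acceptable-answer rule stated above.

```python
import socket
import ssl

# The floor from question 1: TLS 1.2 minimum, TLS 1.3 preferred.
ACCEPTABLE = {"TLSv1.2", "TLSv1.3"}

def meets_minimum(negotiated: str) -> bool:
    """True if the negotiated protocol meets the TLS 1.2 floor."""
    return negotiated in ACCEPTABLE

def negotiated_tls_version(host: str, port: int = 443) -> str:
    """Connect to a host and return the TLS version the server negotiates."""
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()  # e.g. "TLSv1.3"

# Example (hostname is hypothetical):
# print(negotiated_tls_version("monitoring.example-vendor.com"))
```

Running this against the vendor's actual collection endpoint gives you independent evidence rather than a claim in a sales deck.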

Domain 2: Security Certifications (6 Questions)

Security certifications are third-party verification that a vendor's security practices meet defined standards. Certifications claimed without documentation are marketing, not security. Always ask for the actual audit reports.

  1. Does the vendor hold SOC 2 Type II certification? SOC 2 Type II is the minimum acceptable certification for a cloud-based monitoring vendor. Type I (which tests controls at a single point in time) is insufficient. Type II tests that controls operated effectively over an extended review period, typically six to twelve months.
  2. What is the date of the most recent SOC 2 Type II audit, and can you provide the full audit report under NDA? Certifications more than 18 months old represent a gap in ongoing assurance. Vendors who refuse to share the full report (not just the summary) under NDA warrant skepticism.
  3. Does the vendor hold ISO 27001 certification? ISO 27001 is the international standard for information security management systems. It is more prescriptive than SOC 2 and provides additional assurance for EU and international buyers. Ask for the certificate and its validity date.
  4. Does the vendor undergo penetration testing? How frequently, and who conducts it? Annual penetration testing by an independent third-party security firm is the standard. Vendors who rely on internal testing only, or who have not conducted a pen test in the past 12 months, represent a higher risk profile.
  5. Will the vendor sign a HIPAA Business Associate Agreement (BAA)? Organizations in healthcare or with healthcare clients need BAA coverage for any vendor that may process protected health information. Monitoring software on endpoints in healthcare environments can capture screen data containing PHI.
  6. For vendors serving financial services: does the vendor hold relevant financial industry compliance certifications? SOC 1 Type II (the successor to SAS 70) is relevant for financial organizations with financial reporting implications. PCI DSS compliance matters if monitoring data is captured in payment card environments.
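
The 18-month freshness rule from question 2 is simple enough to encode in a procurement checklist script. A minimal sketch; the 548-day threshold is an approximation of 18 months, not a formal standard:

```python
from datetime import date

# ~18 months (1.5 * 365, rounded) -- the assurance-gap threshold above.
MAX_AUDIT_AGE_DAYS = 548

def audit_is_current(audit_date: date, today: date) -> bool:
    """True if the most recent SOC 2 Type II audit falls within ~18 months."""
    return (today - audit_date).days <= MAX_AUDIT_AGE_DAYS
```

A vendor whose report fails this check has not been re-audited recently enough to demonstrate that controls still operate effectively.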

Domain 3: Data Access Controls (6 Questions)

Who can see your employee monitoring data inside the vendor's organization is as important as who can see it inside yours. Vendor employee access to customer monitoring data is a significant and often under-examined risk.

  1. Which vendor employees can access customer monitoring data (screenshots, activity logs, recordings) in production systems? The acceptable answer names a specific, minimal set of roles (e.g., tier-3 support, security incident response, infrastructure operations) and describes the access approval process. Broad access by customer success, sales, or marketing personnel is a red flag.
  2. Is vendor employee access to customer data logged and auditable? All access to customer data by vendor employees should be logged with user identity, timestamp, and action. These logs should be available to customers on request and retained for a minimum of 12 months.
  3. Does the vendor implement role-based access control for customer administrators? Enterprise monitoring deployments require granular RBAC so that department managers see only their team's data, HR administrators have different access than department managers, and executive-level views are controlled separately from operational views.
  4. What multi-factor authentication requirements apply to vendor employee access to production data systems? MFA should be required for all vendor employee access to any system containing customer data. Hardware tokens or authenticator app MFA are preferable to SMS-based MFA.
  5. Does the vendor have a formal privileged access management (PAM) program? PAM controls and audits access by accounts with elevated privileges. The absence of a PAM program indicates that administrative access to production systems containing customer data is insufficiently controlled.
  6. What is the vendor's employee background check policy for personnel with access to customer data? Background checks are standard for roles with production data access. Ask specifically whether background checks are required before granting production access or only as a pre-hiring step.
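
The RBAC requirement in question 3 can be illustrated with a minimal data-scoping sketch. The role names and the specific policy choices (HR excluded from screenshots, executives with full scope) are illustrative assumptions, not a description of any vendor's actual model:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class Admin:
    name: str
    role: str                        # "dept_manager", "hr_admin", or "executive"
    department: Optional[str] = None

@dataclass(frozen=True)
class Record:
    employee: str
    department: str
    kind: str                        # "screenshot", "activity_log", ...

def visible_records(admin: Admin, records: List[Record]) -> List[Record]:
    """Scope monitoring data by role: department managers see only their own
    team, HR sees logs but not raw screenshots, executives see everything."""
    if admin.role == "dept_manager":
        return [r for r in records if r.department == admin.department]
    if admin.role == "hr_admin":
        return [r for r in records if r.kind != "screenshot"]
    if admin.role == "executive":
        return list(records)
    return []                        # unknown roles get nothing: deny by default
```

The deny-by-default final branch is the important design choice: an administrator with an unrecognized role sees no data rather than all of it.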

Domain 4: Data Residency and Transfer Controls (5 Questions)

Where your employee monitoring data is stored and transmitted determines which legal frameworks apply to it, what government access rights exist, and what GDPR transfer mechanism is required for EU data.

  1. Where is monitoring data stored? Specify the cloud provider, region, and country for primary and backup storage. EU customers need EU-region data storage under GDPR. U.S. federal contractors may have specific data residency requirements. Healthcare organizations may need U.S.-only storage for HIPAA purposes.
  2. Can customers specify data residency requirements at contract time? Enterprise buyers increasingly require contractual data residency guarantees. A vendor who cannot offer contractual data residency commitments may not be suitable for regulated industries or multi-national organizations.
  3. What is the vendor's legal mechanism for EU-to-U.S. data transfers? Standard Contractual Clauses (SCCs, updated 2021 version) are the standard mechanism. Any vendor relying on Privacy Shield (invalidated by Schrems II in 2020) demonstrates a compliance gap. Binding Corporate Rules are acceptable for intra-group transfers.
  4. Does the vendor receive government access requests for customer data? What is the process for handling them? Vendors should have a documented process for handling government data requests, including notifying customers when legally permitted to do so, challenging overbroad requests, and publishing transparency reports disclosing request volumes.
  5. What is the vendor's data transfer and export encryption standard for data moved between systems? Data in transit between internal vendor systems (e.g., from collection infrastructure to analytics systems) should be encrypted with the same or higher standard as customer-facing transmission.

Domain 5: Incident Response and Breach Notification (5 Questions)

Security incidents involving monitoring data are among the highest-impact breaches an organization can experience, because the data captured is intimate, high-volume, and spans the entire workforce. Vendor incident response quality determines how much damage a breach causes.

  1. What is the vendor's breach detection and response timeline? How quickly are incidents contained after detection? Industry standard is containment within 24 hours of detection. Vendors who cannot articulate a specific detection-to-containment timeline represent a gap in incident response planning.
  2. What is the contractual breach notification timeline to customers? GDPR requires controllers to notify supervisory authorities within 72 hours of becoming aware of a breach. Contractual vendor-to-customer notification should occur well within that window so the customer can meet its own deadline. Any vendor offering notification timelines longer than 72 hours leaves customers unable to comply with GDPR.
  3. What does the vendor's breach notification include? Breach notifications should include: the categories of data compromised, the approximate number of affected employee records, the suspected or confirmed attack vector, containment actions taken, and the vendor's forensic investigation process. Vague notifications that do not specify data categories are inadequate for customer response planning.
  4. Has the vendor experienced any security incidents or data breaches in the past 24 months? Ask directly. Vendors who have experienced incidents and handled them well demonstrate mature security practices. Vendors who are evasive about past incidents despite public reports raise serious due diligence concerns.
  5. Does the vendor carry cyber liability insurance? What are the coverage limits? Cyber liability insurance that covers customer data breach losses is a reasonable contractual requirement for monitoring vendors handling large-scale employee data. Coverage limits should be commensurate with the potential scale of harm from a full-workforce data breach.

Domain 6: AI Training Data Use (5 Questions)

This domain is the most frequently neglected in monitoring vendor assessments, and it represents one of the highest-risk areas in 2026. Most monitoring vendors are building AI features. The question is whether they are building those features using your employees' data without consent.

  1. Does the vendor use customer employee data to train, fine-tune, or improve AI or machine learning models? This is the single most important AI question. Any use of customer monitoring data for vendor AI model training — without explicit opt-in consent — represents a serious data governance violation and potential GDPR Article 5 breach.
  2. If the vendor uses customer data for AI training: what is the opt-out process, and is opt-out the default? Data use for AI training should require affirmative opt-in, not opt-out. Any vendor who defaults to using customer data for training and requires customers to actively opt out is applying a consent standard that falls below GDPR expectations.
  3. What data is used to train the vendor's AI features? Is it synthetic, anonymized, or raw customer data? Synthetic or genuinely anonymized training data is the appropriate standard for monitoring vendors. Raw or pseudonymized customer data (where re-identification is possible) should not be used for vendor model training without specific consent.
  4. Does the vendor provide customers with a data processing agreement (DPA) that specifies AI training data use restrictions? The DPA is the contractual mechanism for limiting vendor data use. AI training data use restrictions should be explicit in the DPA, not covered by general "service improvement" language that vendors commonly use to justify broad data use.
  5. What third-party AI providers have access to customer monitoring data (e.g., LLM providers, analytics platforms)? Sub-processors with access to monitoring data need to be identified and contractually bound to the same data use restrictions as the primary vendor. Any LLM provider that receives employee monitoring data (even for AI feature processing) is a sub-processor with significant data access implications.

Domain 7: Data Retention, Deletion, and Exit Portability (6 Questions)

Monitoring data accumulates rapidly. A 100-person organization running screenshot monitoring generates tens of thousands of screenshots per week. Understanding how long that data is retained, how it is deleted, and whether you can export it when you leave the vendor is critical for both compliance and operational continuity.
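
The "tens of thousands per week" figure is easy to sanity-check with arithmetic. The capture interval and working hours below are assumed parameters for illustration, not any vendor's defaults:

```python
def weekly_screenshots(employees: int, interval_min: int = 5,
                       hours_per_day: int = 8, days_per_week: int = 5) -> int:
    """Screenshots captured per week at a fixed interval during work hours."""
    per_employee_per_day = (hours_per_day * 60) // interval_min
    return employees * per_employee_per_day * days_per_week

# 100 employees at a 5-minute interval: 100 * 96 * 5 = 48,000 screenshots/week
```

At that volume, retention defaults measured in months rather than days translate directly into millions of stored images per year.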

  1. What are the default data retention periods for each data type: screenshots, activity logs, keystroke data, recordings? Different data types carry different sensitivity and have different legal retention requirements. Screenshots and screen recordings are generally the most sensitive and should have the shortest default retention periods consistent with business need.
  2. Can retention periods be configured by customers to match their legal obligations? GDPR requires retention of personal data for no longer than necessary for the specified purpose. Customers must be able to set retention periods shorter than the vendor default if their legal obligations require it.
  3. What is the verified deletion process when data reaches its retention limit? Deletion should be cryptographic or physical erasure, not logical deletion. Ask whether deleted data can be recovered from vendor backups, and if so, for how long backup copies persist after primary deletion.
  4. What data export functionality does the vendor provide? Customers must be able to export their complete monitoring data in machine-readable format before contract termination. Vendors who only offer summary reports and do not provide raw data export are creating lock-in that limits customer data sovereignty.
  5. What happens to customer data after contract termination? What is the deletion timeline? Contractual commitment to data deletion within 30 days of contract termination is standard. The deletion certificate (formal documentation that all data has been deleted) should be provided to the customer at no additional cost.
  6. Does the vendor retain any anonymized or aggregated customer data after contract termination? Some vendors retain aggregated data from customer deployments for benchmarking or industry reporting purposes. Customers should know what data, if any, persists post-termination and in what form.
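
Question 2's rule — customers may shorten retention to meet legal obligations, but configuration must never silently lengthen it — can be sketched as a merge of vendor defaults with customer overrides. The day counts here are hypothetical, not any vendor's actual defaults:

```python
# Hypothetical vendor defaults, in days, per data type.
DEFAULT_RETENTION_DAYS = {
    "screenshot": 30,
    "recording": 14,
    "activity_log": 90,
    "keystroke": 30,
}

def effective_retention(defaults: dict, overrides: dict) -> dict:
    """Apply customer overrides: each period may be shortened, but an
    override can never exceed the vendor default for that data type."""
    return {k: min(v, overrides.get(k, v)) for k, v in defaults.items()}
```

Clamping with `min` means a misconfigured (or malicious) override of 365 days for activity logs still yields the 90-day default, which is the shortest-wins behavior GDPR's storage-limitation principle implies.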

Domain 8: Contractual Protections and Liability (2 Questions)

Security assessments are only as valuable as the contractual protections they inform. Technical controls and certifications represent the vendor's current posture. Contractual protections create enforceable obligations and allocate liability if that posture fails.

  1. Does the vendor indemnify customers for breaches resulting from vendor negligence? Vendor indemnification for breaches caused by vendor security failures is a reasonable contractual requirement, particularly for organizations deploying monitoring at scale.
  2. What are the liability caps in the vendor's standard contract, and are they adequate relative to potential breach harm? Standard SaaS liability caps (often capped at 12 months of fees paid) may be inadequate for monitoring vendors handling large employee populations. Negotiate liability caps appropriate to the actual risk profile — including potential regulatory fines and employee notification costs in the event of a breach.

How eMonitor Answers These 40 Questions

We publish this framework because eMonitor is willing to be evaluated by the same standard we recommend applying to every vendor. Here is where eMonitor stands on the key domains in this assessment.

Encryption: eMonitor uses AES-256 encryption for data at rest and TLS 1.3 for data in transit. All monitoring data transmitted from endpoint agents uses encrypted channels. Full encryption documentation is available at /security.

Data collection scope: eMonitor collects monitoring data only during configured work hours. Off-hours data collection is disabled by default and cannot be enabled for personal devices. This design limits the scope of data at risk in the event of a breach and satisfies the data minimization principle under GDPR Article 5(1)(c).

AI training data use: eMonitor does not use customer employee data to train AI models. Customer data is processed only to deliver the contracted monitoring service. This commitment is documented in eMonitor's Data Processing Agreement as an absolute restriction, not an opt-out: a data practice that requires explicit consent should never be something customers have to opt out of.

Access controls: eMonitor's role-based access controls ensure that monitoring data is visible only to the managers and administrators designated by the customer. All admin access to monitoring data is logged with tamper-evident audit trails. Employee-facing dashboards give individual workers visibility into their own data, reducing the asymmetric information risk that creates both compliance exposure and employee relations problems.

For the complete security documentation, the eMonitor security page covers encryption standards, certification status, incident response procedures, and data handling commitments in detail. The employee monitoring buyer's guide covers the broader selection framework for evaluating monitoring software beyond security, and the vendor security checklist provides a downloadable version of this assessment for your internal procurement process.

Organizations with specific compliance requirements, including SOC 2 alignment, HIPAA BAA, or GDPR Article 28 processor agreements, can request these documents directly from eMonitor's compliance team through the security page or by contacting support before beginning a trial deployment.

Run eMonitor Through Your Vendor Security Assessment

We publish our security practices, answer all 40 questions in our documentation, and provide our DPA on request. No evasive answers. Start your trial or request documentation.

Start Free Trial

Frequently Asked Questions

What security questions should I ask a monitoring vendor?

The minimum security questions for a monitoring vendor assessment cover: data encryption standards in transit and at rest, SOC 2 Type II certification status and the most recent audit report date, incident response procedures and breach notification timelines, data residency and cross-border transfer controls, employee data access logs and admin activity auditing, AI training data use policies, data export and exit portability provisions, and sub-processor identification. Vendors who cannot answer these questions within 48 hours represent a vendor risk in themselves.

What certifications should an employee monitoring vendor have?

An employee monitoring vendor handling sensitive employee behavioral data should hold at minimum: SOC 2 Type II certification (not just Type I), ISO 27001 certification, and GDPR compliance documentation for vendors processing EU employee data. Vendors serving regulated industries should also hold HIPAA BAA capability for healthcare organizations and relevant financial services compliance certifications for banking clients. Always ask for the actual certification documents and their validity dates, not just claims in marketing materials.

How do I assess a monitoring vendor's data privacy practices?

Assessing a monitoring vendor's data privacy practices requires reviewing their published privacy policy for employee data specifically, their data processing agreement (DPA) for GDPR compliance, their sub-processor list and update notification process, their data retention and deletion policies, their breach notification timeline commitments, and whether employee data is used to train AI models. The DPA and sub-processor list are non-negotiable documents that any GDPR-compliant vendor should provide on request without hesitation.

What are the biggest security risks of employee monitoring software?

The biggest security risks of employee monitoring software are: unauthorized access to screenshot and screen recording data by vendor employees or through vendor system compromise, use of employee behavioral data to train AI models without organizational consent, inadequate sub-processor security, insufficient data residency controls exposing data to foreign government access, and vendor lock-in that prevents data export when you terminate the contract. Screenshot and screen recording data is particularly sensitive because it may contain credentials, financial data, and confidential communications.

Does eMonitor publish its security practices and certifications?

eMonitor publishes its security practices, data handling policies, and encryption standards in the security documentation available at employee-monitoring.net/security. The platform uses 256-bit AES encryption for data at rest and TLS 1.3 for data in transit. Monitoring data collection is limited to configured work hours only. eMonitor does not use customer employee data to train AI models. Role-based access controls limit data visibility to authorized managers and administrators, and all admin access is logged with tamper-evident audit trails.

Can I request a vendor's SOC 2 Type II report before signing?

Yes, and you should. Any reputable SaaS vendor handling sensitive employee data will provide their SOC 2 Type II report under an NDA to qualified prospective customers. If a vendor refuses to share the report, provides only a summary rather than the full audit, or their most recent audit is older than 18 months, treat this as a significant due diligence concern. SOC 2 Type II reports detail which controls were tested, the testing period, and any exceptions found, which is far more informative than the certificate alone.

What should a data processing agreement (DPA) include for monitoring software?

A monitoring software DPA should include: the categories of personal data processed (behavioral data, screenshots, keystroke data, location data), the purpose and legal basis for processing, retention and deletion timelines for each data type, sub-processor list and approval process for changes, security obligations and breach notification timelines, restrictions on use of customer data for AI training or vendor benchmarking, data export and portability rights, and deletion certification upon contract termination. Any DPA that does not address AI training data use specifically is incomplete for 2026 procurement standards.

How do I evaluate a monitoring vendor's sub-processor security?

Evaluating sub-processor security requires the vendor to provide a complete sub-processor list identifying all third parties that access or process customer monitoring data, the categories of data each sub-processor handles, and the contractual protections binding each sub-processor. For EU data, all sub-processors must have appropriate GDPR transfer mechanisms in place. Ask specifically whether any AI or LLM providers are sub-processors, and if so, what data is shared with them and what contractual restrictions apply to their use of that data.

The Monitoring Vendor That Answers Every Question

eMonitor publishes security documentation, provides DPAs on request, and does not use your employee data to train AI models. Trusted by 1,000+ companies.

Start Your Free Trial