ShipSaving Security

Data security

ShipSaving works diligently to ensure the confidentiality, integrity, and availability of your data.

  • Encryption in transit protects the confidentiality, integrity, and authenticity of your data. As an industry best practice, we use HTTPS to encrypt customer data sent and received by our servers. This includes, but is not limited to, all data transmitted over the RESTful JSON API that communicates with our backend microservice architecture. This protects your information from tampering and unauthorized disclosure. We also implement AES-GCM (an authenticated encryption algorithm based on a pre-shared key) to protect both the confidentiality and integrity of data transmissions.
  • Database access controls restrict access to authorized users and IP addresses. Developers undergo strict account management. Auditing and robust log records provide relevant insights to personnel who regularly review database access for signs of malicious activity. Database passwords must adhere to internal standards for complexity. Database contents are encrypted both at rest and in transit.
  • Field-level encryption with the AES-GCM algorithm protects sensitive database entries from unauthorized tampering or disclosure. When the application writes data to the database, sensitive fields are encrypted with an additional key. When the contents of an encrypted database field are retrieved, the results cannot be deciphered without the respective key. This second line of defense significantly reduces the potential impact of leaked data. Our software developers and database administrators (DBAs) operate by default on encrypted fields. Access to the original contents of encrypted fields is granted to staff only on a "need-to-know" basis (a brief sketch of this style of encryption follows this list).
  • Data desensitization protects sensitive customer data such as phone numbers, email addresses, and payment card information. Predefined rules tokenize, hide, or mask this data. For extremely sensitive data such as payment card information, we persist only a subset of the original data, significantly reducing the impact of any potential exposure.
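
To make the field-level encryption described above concrete, the following is a minimal sketch of AES-256-GCM encryption and decryption using Node's built-in crypto module. It is an illustration only: the function names, key handling, and encoding choices are assumptions, not a description of our production implementation.

```typescript
// Minimal AES-256-GCM field encryption sketch (illustrative only).
// Key management, encodings, and function names are assumptions.
import { randomBytes, createCipheriv, createDecipheriv } from "crypto";

const KEY = randomBytes(32); // in practice the key would come from a secrets manager

// Encrypt a sensitive field value; the stored form bundles nonce, auth tag, and ciphertext.
function encryptField(plaintext: string): string {
  const iv = randomBytes(12); // 96-bit nonce, as recommended for GCM
  const cipher = createCipheriv("aes-256-gcm", KEY, iv);
  const ciphertext = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  const tag = cipher.getAuthTag(); // 16-byte authentication tag
  return [iv, tag, ciphertext].map((b) => b.toString("base64")).join(".");
}

// Decrypt and authenticate; final() throws if the ciphertext or tag was tampered with.
function decryptField(stored: string): string {
  const [iv, tag, ciphertext] = stored.split(".").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", KEY, iv);
  decipher.setAuthTag(tag);
  return Buffer.concat([decipher.update(ciphertext), decipher.final()]).toString("utf8");
}
```

Because GCM is an authenticated mode, decryptField raises an exception on any modified value, which is the tamper-detection behavior described under "Login security" below.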
Access control

Role-based access control (RBAC) assigns functional and data permissions to designated roles, and one or more roles to individual user accounts. A "super administrator," for instance, not only possesses global access to all actions and data, but will also typically customize other roles as needed and assign them to users. Our system administrators apply specific role assignments to customers. This ensures that customers possess functional authority and data authority over only their own data. Malicious actors may attempt to tamper with input parameters to conceal executable commands or cross trust boundaries. This can lead to the unauthorized acquisition of server permissions and the unauthorized disclosure of customer data. The system addresses these concerns as follows:

  • Cross-site scripting (XSS) occurs when untrusted JavaScript or other scripting code is supplied from one source and rendered in an unexpected location, such as another user's browsing session. An XSS attack can tamper with or expose data, opening up customer accounts to compromise and endangering their normal operation. To prevent this class of attack, the system is designed to refuse unsafe inbound content. The system will check, filter, and escape any customer input with whitelists of trusted character sets and HTML tags. As an additional layer of defense, the system will filter and/or sanitize output before rendering HTML content to users.
  • Buffer overflows occur when a system receives input of a length that overflows the memory area designated for its storage. A malicious user who succeeds at this attack may tamper with or expose data, or possibly gain system root access. To avoid receiving data that exceeds system capacity, we treat external data as untrusted input and perform boundary checks before committing it to our systems.
  • Cross-site request forgery (CSRF) can occur when an attacker causes a victim's browser to automatically submit a request. This attack is similar to, but different from, a cross-site scripting (XSS) attack. In the case of CSRF, the attacker crafts a malicious request directed at a particular website. If the customer has an active browsing session on that website, the request can be executed to create, read, update, or delete data as the customer themselves. This is equivalent to the attacker logging in with the customer's own credentials to carry out harmful actions within the scope of the customer's authority. To prevent CSRF attacks and protect customer accounts, the system validates CSRF tokens, HTTP referer headers, and CAPTCHA responses.
  • Path traversal can occur when query parameters and server paths are not properly sanitized or validated before they are used to access a server-side file system location. An attacker may supply a string containing directory navigation operators, possibly URL-encoded (e.g., "../" or "%2E%2E%2F"), to gain unauthorized access to information systems. To prevent this, all customer input is validated and/or escaped on the server after it is received (see the sketch after this list).
  • A web shell is a type of remote access trojan (RAT) that exploits a script upload vulnerability to create a command execution environment. An attacker will often use this shell environment through a web browser. This can lead to a series of hazards such as unauthorized information disclosure and tampering. To avoid receiving a web shell, we strictly limit and verify uploaded files. Customers may upload only static files and the system does not trust original filenames. In addition to restricting the upload of malicious code files, the system restricts the execution permissions of related directories.
  • Unexpected data disclosure may occur when access logs record sensitive query parameters. To prevent this, the HTTP GET method is not used to transmit sensitive data into the system. When sensitive data must be transmitted into the system, the HTTP POST method is used, with sensitive data placed in the body of the request. The HTTP GET method may be used to transmit non-sensitive data to the system, and to retrieve both sensitive and non-sensitive data from the system.
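
As a rough illustration of the server-side input handling described in the list above (not our actual code), the sketch below shows two common checks: escaping untrusted text before it is rendered as HTML, and rejecting file names that would escape an allowed upload directory. The function names and the UPLOAD_ROOT path are hypothetical.

```typescript
// Illustrative server-side input handling (hypothetical names and paths).
import * as path from "path";

// Escape untrusted text before embedding it in HTML, to blunt stored or reflected XSS.
function escapeHtml(input: string): string {
  return input
    .replace(/&/g, "&amp;")
    .replace(/</g, "&lt;")
    .replace(/>/g, "&gt;")
    .replace(/"/g, "&quot;")
    .replace(/'/g, "&#39;");
}

const UPLOAD_ROOT = "/srv/uploads"; // hypothetical upload directory

// Resolve a client-supplied file name and reject anything that escapes UPLOAD_ROOT,
// including URL-encoded traversal sequences such as "%2E%2E%2F" (decoded before the check).
function resolveUpload(fileName: string): string {
  const decoded = decodeURIComponent(fileName);
  const resolved = path.resolve(UPLOAD_ROOT, decoded);
  if (!resolved.startsWith(UPLOAD_ROOT + path.sep)) {
    throw new Error("Path traversal attempt rejected");
  }
  return resolved;
}
```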
Login security

Customer password security promotes not only the security of customer information on our platform but also the stability of the system as a whole. Some customers may use the same password on multiple platforms; once this password is cracked, the risk of data leakage exists on any other platforms with which it is shared. We mitigate the potential for damage as follows:

  • The system assigns an authentication token to customers at login. This token accompanies the customer throughout their session and grants the ability to access assigned functions and data. The system reduces the potential for unauthorized disclosure or tampering by requiring a valid token with requests for privileged functions and data.
  • User authentication checks occur with every request. We encourage customers to log out manually when they are finished with the system. As a protection against abandoned sessions, if authentication credentials expire or the authentication process fails, the customer will be required to log in again to minimize the potential for data leakage.
  • Cookies are protected from unauthorized disclosure and tampering with the HttpOnly and Secure attributes. The HttpOnly flag prevents customer information in cookies from being read by malicious JavaScript files. The Secure flag ensures that a client browser will submit the cookie only with HTTPS requests.
  • The AES-GCM encryption algorithm promotes authenticity in the system with authentication codes and authenticated encryption with associated data (AEAD). If system decryption functions note authenticity issues during the decryption process, an exception will be thrown to prevent further processing of corrupted or tampered customer data.
  • Customer password length and complexity requirements mitigate the likelihood of brute-force password cracking. We guarantee that customer passwords are not stored in plaintext. Customer passwords are salted and hashed on the frontend with a slow hash operation (a generic sketch of salted slow hashing follows this list). Sensitive data, including authentication credentials, will be transmitted to the backend only over a connection encrypted with HTTPS.
  • Two-factor verification validates customer ownership of email addresses used for website authentication. When we detect a customer account at risk of misappropriation, the affected customer must correctly supply both their password and a one-time code that the system will send to their email address. CAPTCHA affords an additional layer of protection for this process.
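
For illustration, the sketch below shows one generic way to salt and slow-hash a password with Node's built-in scrypt and to compare hashes in constant time. The parameters and storage format are assumptions and do not describe our exact scheme.

```typescript
// Illustrative salted slow hashing with scrypt (parameters and format are assumptions).
import { randomBytes, scryptSync, timingSafeEqual } from "crypto";

// Hash a password with a fresh random salt; store "salt.hash" (both base64).
function hashPassword(password: string): string {
  const salt = randomBytes(16);
  const hash = scryptSync(password, salt, 64); // scrypt is deliberately slow and memory-hard
  return `${salt.toString("base64")}.${hash.toString("base64")}`;
}

// Verify a login attempt against the stored value without leaking timing information.
function verifyPassword(password: string, stored: string): boolean {
  const [saltB64, hashB64] = stored.split(".");
  const salt = Buffer.from(saltB64, "base64");
  const expected = Buffer.from(hashB64, "base64");
  const actual = scryptSync(password, salt, expected.length);
  return timingSafeEqual(actual, expected);
}
```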
Audit logs

We are continuously strengthening our product security controls to ensure the maximum confidentiality, integrity, and availability of customer accounts and data.

  • The vast majority of customer operations trigger the creation of audit logs. This provides customers with the ability to trace events when necessary. We provide operation logs for each event that include a timestamp, customer ID, the object being operated on, and the contents of the operation (a simplified sketch of such a record follows this list). To facilitate unified management, system administrators have the ability to view the operation logs of all customers.
    • We record operations performed by customers on their own data. If an operation involves sensitive data, such as password or payment card information, metadata attached to log entries will be encrypted to minimize the potential for data leakage.
    • Customer requests for data will be approved or denied based upon credentials and additional security considerations. The results of these decisions will be recorded.
    • Multiple customer user accounts may collaborate as a team. In these cases, a team administrator may be designated as an approver for purchase orders placed by other team members. For example, a warehouse manager who wishes to create a purchase order will submit an application to their administrator for approval. The administrator can then choose to deny or approve the application. If the application is approved, the purchase order will be created. The system records these decisions with per-application granularity.
    • Potentially risky operations trigger alarms to be handled by dedicated personnel.
    • Logs are retained for 180 days.
  • Developers are subject to sound auditing and log management. We mitigate the potential for developer credential misuse by logging the following events:
    • Login and logout timestamps;
    • Database and server access attempts;
    • Data, file, and network access attempts;
    • Changes to customer data, such as deletion or modification of sensitive info;
    • Exceptions or other signs of abnormal activity, which may trigger alarms.
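
As a simplified illustration of the operation logs described above, the structure below captures the fields listed (timestamp, customer ID, target object, and operation contents) and encrypts sensitive metadata before it is written. The interface and function names are hypothetical; encryptField refers to the AES-GCM sketch shown earlier.

```typescript
// Hypothetical audit log record; encryptField refers to the earlier AES-GCM sketch.
declare function encryptField(plaintext: string): string;

interface AuditLogEntry {
  timestamp: string;      // ISO-8601 time of the operation
  customerId: string;     // who performed the operation
  target: string;         // the object being operated on
  operation: string;      // the contents of the operation
  sensitiveMeta?: string; // encrypted metadata, present only if sensitive data was involved
}

function buildAuditEntry(
  customerId: string,
  target: string,
  operation: string,
  sensitiveMeta?: Record<string, string>
): AuditLogEntry {
  return {
    timestamp: new Date().toISOString(),
    customerId,
    target,
    operation,
    // Sensitive metadata (e.g. masked payment details) is encrypted before storage.
    sensitiveMeta: sensitiveMeta ? encryptField(JSON.stringify(sensitiveMeta)) : undefined,
  };
}
```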
Server security

  • Firewalls and network security policies deny or redirect data flow to reduce the efficacy of malicious network-based attacks. Security rules are optimized to establish a proper security perimeter for system components.
  • Network Access Control (NAC) restricts server access to authorized developers, originating from whitelisted IP addresses, who comply with pertinent security policies.
  • When network bandwidth becomes saturated, it can exhaust server processing capacity, preventing legitimate users from being able to access resources. A distributed denial-of-service (DDoS) attack is one sort of malicious activity that can produce this result. To address this risk, we:
    • Limit the frequency of requests from individual IP addresses (see the sketch after this list);
    • Apply load-balancing to shunt traffic toward available servers;
    • Disable ICMP on security devices such as firewalls;
    • Deny or redirect abnormal traffic with a hardware firewall that mitigates the potential for DDoS with rule-based packet-filtering, deep packet inspection (DPI), and Web Application Firewall (WAF) filtering based on request contents and disposition;
    • Apply ISP near-source traffic scrubbing and carrier-level traffic suppression to minimize the potential for incidents that could adversely affect service availability to customers;
    • Strictly limit, at the application layer, the number of connections and CPU usage time from individual IP addresses;
    • Reduce unnecessary dynamic database queries.
  • A competent patch management process regularly reviews and addresses operating system vulnerabilities. Timely application of patches minimizes the potential for malicious attacks.
  • Unnecessary service ports are closed as part of the system hardening process.
  • Audit logs record every server operation. Designated security personnel regularly review these logs and respond immediately when an incident is noted.
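
The per-IP request limits mentioned above could be approximated at the application layer with a simple fixed-window counter, as in the sketch below. The window length and limit are arbitrary assumptions; in practice this role is typically shared with load balancers and the WAF rather than handled only in-process.

```typescript
// Illustrative fixed-window rate limiter keyed by client IP (limits are assumptions).
const WINDOW_MS = 60_000;  // 1-minute window
const MAX_REQUESTS = 120;  // maximum requests per IP per window

const counters = new Map<string, { windowStart: number; count: number }>();

// Returns true if the request should be allowed, false once the IP exceeds its quota.
function allowRequest(ip: string, now: number = Date.now()): boolean {
  const entry = counters.get(ip);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    counters.set(ip, { windowStart: now, count: 1 }); // start a new window for this IP
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_REQUESTS;
}
```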
Payment security

  • To protect customer financial information, we do not store data from payment cards.
  • A third-party vendor, Stripe, acts as our payment processor. Please refer to Stripe for more information on their security policies and procedures.
Developer security

  • Developers regularly undergo training to improve security and privacy awareness.
  • The use of external storage media (USB disks, portable hard drives, thumb drives, etc.) is prohibited on company computers.
  • The use of unsecured public cloud applications, including file synchronization tools such as Dropbox and Google Drive, is also prohibited on company computers.
Data backup

Our customers fulfill orders with the aid of transactional and other data ("Platform Data") synchronized into our system from third-party e-commerce platforms ("Marketplaces"). To ensure the availability of Platform Data when it is needed, we perform regular backups. Each Marketplace imposes its own policies on the use, protection, and retention of Platform Data.

ShipSaving treats Platform Data, and backups thereof, in strict accordance with the requirements of each Marketplace. We do not maintain Platform Data beyond the retention periods mandated by Marketplaces. We do not sell, share, or otherwise disclose customer data in any manner that would violate our agreements with Marketplaces.

Backup frequency

  • Full database backups occur weekly at 23:59 on Sunday.
  • Incremental database backups occur daily at 03:00, 09:00, 15:00, and 21:00.
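
For reference, this schedule corresponds to the cron expressions in the sketch below, shown here with the node-cron package purely as an illustration; the backup functions are placeholders, and this is not a description of our actual tooling.

```typescript
// Backup schedule expressed as cron jobs (node-cron and the backup functions are assumptions).
import cron from "node-cron";

declare function runFullBackup(): Promise<void>;
declare function runIncrementalBackup(): Promise<void>;

// Full backup: Sundays at 23:59.
cron.schedule("59 23 * * 0", () => { void runFullBackup(); });

// Incremental backups: daily at 03:00, 09:00, 15:00, and 21:00.
cron.schedule("0 3,9,15,21 * * *", () => { void runIncrementalBackup(); });
```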

Backup plan

  • The database currently uses master-slave replication to provide data redundancy. Binary logging is enabled on the slave database for off-site backup.
  • Future plans include database read-write separation ("master write, slave read") to improve scalability and availability. Under this architecture, backups will continue to follow the schedule above, and data may be split into separate databases and additional tables to further improve ShipSaving's recovery time objective (RTO) and recovery point objective (RPO) metrics (a sketch of the planned routing follows).
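
The planned read-write separation could, for example, be routed at the application layer as sketched below. The mysql2 dependency, hostnames, and helper names are assumptions used purely for illustration.

```typescript
// Illustrative read-write splitting: writes go to the primary, reads to the replica.
// Hostnames, credentials, and the mysql2 dependency are assumptions.
import mysql from "mysql2/promise";

const primary = mysql.createPool({ host: "db-primary.internal", user: "app", database: "shipsaving" });
const replica = mysql.createPool({ host: "db-replica.internal", user: "app", database: "shipsaving" });

// Route INSERT/UPDATE/DELETE statements to the primary ("master write").
async function writeQuery(sql: string, params: unknown[] = []) {
  return primary.execute(sql, params);
}

// Route SELECT statements to the replica ("slave read").
async function readQuery(sql: string, params: unknown[] = []) {
  return replica.execute(sql, params);
}
```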
Security incident response plan (SIRP)

We maintain a security incident response plan (SIRP), which we review and verify every six months and after any major infrastructure or system change. When our backend engineering managers investigate a security incident, or when automated mechanisms trigger an alarm, we take the following steps:

  • We identify the type and extent of any associated events.
  • We disconnect any relevant systems from the network.
  • We gather evidence. System logs can be reviewed without shutting down the associated servers, preserving potential evidence in memory for forensic analysis if required.
  • In the event that a breach may affect passwords, we will lock down sensitive information and rotate any relevant passwords.
  • The impact of a data breach will be assessed based upon logs and other available evidence. Once a root cause analysis is performed, remedial measures will be taken. These may include, but are not limited to, cryptographic erasure of affected systems, patching of relevant software or operating systems, reconfiguration of firewalls, isolation of subnets, malware scanning, and recalibration of automated alarm thresholds.