We host our servers in Google Cloud Platform (GCP), Amazon Web Services (AWS), and other leading cloud data centers, where we add world-class physical security controls on top of the security layers provided by the data center operator. For example, at such sites we may operate independent biometric identification systems, cameras, and metal detectors. These data-center partners design and build their own data centers, which incorporate multiple layers of physical security protection. Access to these data centers is limited to a very small fraction of employees, who must pass through controls such as biometric identification, metal detection, cameras, vehicle barriers, and laser-based intrusion detection systems.
The security of the infrastructure is designed in layers: the physical components and data center, hardware provenance, secure boot, secure inter-service communication, data secured at rest, protected access to services from the internet, and finally the technologies and people processes we deploy for operational security.
Each server machine in the data center has its own specific identity that can be tied to the hardware root of trust and the software with which the machine booted. This identity is used to authenticate API calls to and from low-level management services on the machine. We ensure that these servers run up-to-date versions of their software stacks (including security patches), and we are able to detect and diagnose hardware and software problems and to remove machines from service if necessary.
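The idea of tying a machine's identity to the software it booted can be sketched as a measured-boot style hash chain: each boot stage extends a running measurement with the digest of the next component, and the final value is compared against a known-good measurement before the machine's identity is trusted. This is a hypothetical illustration; the component names and values are invented, not our actual boot chain.

```python
import hashlib

def measure_boot_chain(components):
    """Fold each boot component into a running measurement,
    in the style of extending a TPM PCR."""
    measurement = b"\x00" * 32
    for blob in components:
        measurement = hashlib.sha256(
            measurement + hashlib.sha256(blob).digest()
        ).digest()
    return measurement

# Illustrative boot components (hypothetical names/versions).
good_chain = [b"firmware-v7", b"bootloader-v3", b"kernel-v5.15"]
expected = measure_boot_chain(good_chain)

# Tampering with any stage changes the final measurement,
# so the machine's identity credential would not be honored.
tampered = [b"firmware-v7", b"bootloader-evil", b"kernel-v5.15"]
assert measure_boot_chain(good_chain) == expected
assert measure_boot_chain(tampered) != expected
```

Because each stage's digest is folded into the running value, a modification anywhere in the chain propagates to the final measurement.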
Service Identity, Integrity, and Isolation
We use cryptographic authentication and authorization at the application layer for inter-service communication. This provides strong access control at an abstraction level and granularity that administrators and services can naturally understand. We do not rely on internal network segmentation or firewalling as our primary security mechanisms, though we do use ingress and egress filtering at various points in our network to prevent IP spoofing as a further security layer. This approach also helps us to maximize our network’s performance and availability.
Each service that runs on the infrastructure has an associated service account identity. A service is provided cryptographic credentials that it can use to prove its identity when making or receiving remote procedure calls (RPCs) to other services. These identities are used by clients to ensure that they are talking to the correct intended server, and by servers to limit access to methods and data to particular clients.
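The pattern described above can be sketched as follows: the client signs each RPC with a credential tied to its service identity, and the server verifies the signature before applying per-client access control. This is a minimal illustration using an HMAC-signed token; the token format, service names, and shared-secret scheme are assumptions for the sketch, not our actual wire protocol.

```python
import hashlib
import hmac
import json
import time

def sign_rpc(service_name: str, method: str, secret: bytes) -> dict:
    """Client side: attach a signed identity token to an outgoing RPC."""
    payload = json.dumps(
        {"svc": service_name, "method": method, "ts": int(time.time())},
        sort_keys=True,
    )
    sig = hmac.new(secret, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_rpc(token: dict, secret: bytes, allowed_services: set) -> bool:
    """Server side: verify the signature, then limit access to known clients."""
    expected = hmac.new(
        secret, token["payload"].encode(), hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(expected, token["sig"]):
        return False  # credential forged or corrupted
    claims = json.loads(token["payload"])
    return claims["svc"] in allowed_services

# Hypothetical services "billing" and "frontend" for illustration.
key = b"per-service-credential"
token = sign_rpc("billing", "GetInvoice", key)
assert verify_rpc(token, key, {"billing"})        # authorized caller
assert not verify_rpc(token, key, {"frontend"})   # identity not on the allow list
```

In practice this role is typically played by mutually authenticated TLS or a comparable credential system rather than a shared secret, but the flow — prove identity, then authorize per client — is the same.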
These mechanisms are designed to limit the ability of an insider or adversary to make malicious modifications, and they also provide a forensic trail from a service back to its source. We use a variety of isolation and sandboxing techniques to protect a service from other services running on the same machine, including normal Linux user separation, language- and kernel-based sandboxes, and hardware virtualization.
Secure Data Storage
Encryption at Rest
Our infrastructure provides a variety of storage services. The storage services can be configured to use keys from the central key management service to encrypt data before it is written to physical storage. This key management service supports automatic key rotation, provides extensive audit logs, and integrates with the previously mentioned end user permission tickets to link keys to particular end users.
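The key-management pattern described above is commonly called envelope encryption: each object gets its own data-encryption key (DEK), which is wrapped by a versioned key-encryption key (KEK) held by the central service, so rotating the KEK re-wraps small keys rather than re-encrypting all stored data. The toy below illustrates only the versioning, rotation, and audit-logging structure; the "wrapping" is a keyed mask for demonstration and is NOT real cryptography.

```python
import hashlib
import os

class ToyKMS:
    """Illustrative key management service with versioned KEKs,
    rotation, and an audit log. Not production cryptography."""

    def __init__(self):
        self.keks = {1: os.urandom(32)}  # version -> key material
        self.current = 1
        self.audit_log = []

    def rotate(self):
        """Automatic key rotation: mint a new KEK version.
        Old versions are kept so existing wrapped DEKs stay readable."""
        self.current += 1
        self.keks[self.current] = os.urandom(32)

    def _mask(self, version: int, blob: bytes) -> bytes:
        # Toy stand-in for key wrapping: XOR with a KEK-derived stream.
        stream = hashlib.sha256(self.keks[version]).digest()
        return bytes(a ^ b for a, b in zip(blob, stream))

    def wrap(self, dek: bytes):
        self.audit_log.append(("wrap", self.current))
        return (self.current, self._mask(self.current, dek))

    def unwrap(self, wrapped):
        version, blob = wrapped
        self.audit_log.append(("unwrap", version))
        return self._mask(version, blob)  # the XOR mask is its own inverse

kms = ToyKMS()
dek = os.urandom(32)               # per-object data-encryption key
wrapped = kms.wrap(dek)
kms.rotate()                       # rotation does not orphan old data
assert kms.unwrap(wrapped) == dek  # version 1 KEK still unwraps it
```

Storing the KEK version alongside each wrapped DEK is what lets rotation proceed without touching the underlying ciphertext.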
Performing encryption at the application layer allows the infrastructure to isolate itself from potential threats at the lower levels of storage such as malicious disk firmware. That said, the infrastructure also implements additional layers of protection. We enable hardware encryption support in our hard drives and SSDs and meticulously track each drive through its lifecycle.
Before a decommissioned encrypted storage device can physically leave our custody, it is wiped using a multi-step process that includes two independent verifications. Devices that do not pass this wiping procedure are physically destroyed (e.g. shredded) on-premises.
Deletion of Data
Deletion of data from virtual machines often starts with marking specific data as “scheduled for deletion” rather than actually removing the data entirely. This allows us to recover from unintentional deletions, whether customer-initiated or due to a bug or process error internally. After having been marked as “scheduled for deletion,” the data is deleted in accordance with service-specific policies.
When an end user deletes their entire account, the infrastructure notifies services handling end user data that the account has been deleted. The services can then schedule data associated with the deleted end user account for deletion.
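The "scheduled for deletion" pattern described above can be sketched as a tombstone plus a purge job: data is first marked with a deadline so unintentional deletions can be reversed, then permanently removed once a service-specific retention window has elapsed. The field names and the 30-day window below are illustrative assumptions, not our actual policies.

```python
import datetime

RETENTION = datetime.timedelta(days=30)  # assumed service-specific window

class Record:
    def __init__(self, data):
        self.data = data
        self.delete_after = None  # None means "live"

    def schedule_deletion(self, now):
        """Tombstone the record instead of removing it immediately."""
        self.delete_after = now + RETENTION

    def restore(self):
        """Recover from an unintentional deletion before the purge runs."""
        self.delete_after = None

def purge(records, now):
    """Permanently drop records whose retention window has elapsed."""
    return [r for r in records
            if r.delete_after is None or r.delete_after > now]

now = datetime.datetime(2024, 1, 1)
rec = Record("user-profile")
rec.schedule_deletion(now)
assert len(purge([rec], now + datetime.timedelta(days=5))) == 1   # still recoverable
assert len(purge([rec], now + datetime.timedelta(days=31))) == 0  # permanently gone
```

The window between tombstoning and purging is what allows recovery from customer mistakes or internal bugs.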
Denial of Service (DoS) Protection
The sheer scale of our infrastructure partner (Google) enables us to simply absorb many DoS attacks. That said, we have multi-tier, multi-layer DoS protections that further reduce the risk of any DoS impact on any service.
After the provider's backbone delivers an external connection to one of our data centers, it passes through several layers of hardware and software load balancing. These load balancers report information about incoming traffic to a central DoS service running on the infrastructure. When the central DoS service detects that a DoS attack is taking place, it can configure the load balancers to drop or throttle traffic associated with the attack.
At the next layer, the instances also report information about requests that they are receiving to the central DoS service, including application layer information that the load balancers don’t have. The central DoS service can then also configure the instances to drop or throttle attack traffic.
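The control loop in the two paragraphs above can be sketched as follows: load balancers (and instances) report per-source traffic counts to a central service, which flags sources that exceed a threshold so subsequent traffic from them is dropped or throttled. The threshold and data structures are illustrative assumptions, not our real tuning.

```python
import collections

THRESHOLD = 1000  # requests per reporting interval (assumed value)

class CentralDosService:
    def __init__(self):
        self.counts = collections.Counter()
        self.throttled = set()

    def report(self, source_ip: str, request_count: int):
        """Called by each load balancer or instance with observed traffic."""
        self.counts[source_ip] += request_count
        if self.counts[source_ip] > THRESHOLD:
            self.throttled.add(source_ip)

    def should_drop(self, source_ip: str) -> bool:
        """Load balancers consult this before forwarding traffic."""
        return source_ip in self.throttled

dos = CentralDosService()
dos.report("203.0.113.7", 600)   # report from one load balancer
dos.report("203.0.113.7", 600)   # another balancer sees the same source;
                                 # combined count now exceeds the threshold
assert dos.should_drop("203.0.113.7")
assert not dos.should_drop("198.51.100.2")
```

Centralizing the counts is the point: no single load balancer sees enough traffic to recognize a distributed attack, but the aggregate view does.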
We create infrastructure software securely, we protect our employees' machines and credentials, and we defend against threats to the infrastructure from both insiders and external actors. We also use automated tools for detecting security bugs, including fuzzers, static analysis tools, and web security scanners.
As a final check, we use manual security reviews that range from quick triages for less risky features to in-depth design and implementation reviews for the riskiest ones. These reviews are conducted by a team that includes experts in web security, cryptography, and operating system security.
In addition, we run a Vulnerability Rewards Program that pays anyone who discovers and reports bugs in our infrastructure or applications. We have paid out several million dollars in rewards under this program.
Keeping Employee Devices and Credentials Safe
We make a heavy investment in protecting our employees' devices and credentials from compromise, and in monitoring activity to discover potential compromises or illicit insider activity. This is a critical part of our investment in ensuring that our infrastructure is operated safely. Sophisticated phishing is a persistent threat to our employees. To guard against it, we have replaced phishable OTP second factors with mandatory use of U2F-compatible security keys for our employee accounts.
We make a large investment in monitoring the client devices that our employees use to operate our infrastructure. We ensure that the operating system images for these client devices are up-to-date with security patches and we control the applications that can be installed.
We aggressively limit and actively monitor the activities of employees who have been granted administrative access to the infrastructure and continually work to eliminate the need for privileged access for particular tasks by providing automation that can accomplish the same tasks in a safe and controlled way. We additionally have systems for scanning user-installed apps, downloads, browser extensions, and content browsed from the web for suitability on corp clients.
We use various tools that integrate host-based signals on individual devices, network-based signals from various monitoring points in the infrastructure, and signals from infrastructure services. Rules and machine intelligence built on top of these pipelines give operational security engineers warnings of possible incidents. Our investigation and incident response teams triage, investigate, and respond to these potential incidents 24 hours a day, 365 days a year.
Anti-Malware and Blacklist Monitoring
We understand that the price of freedom from malware is eternal vigilance. With a powerful malware scanner integrated into our multi-vector threat defenses, we automatically find and fix viruses, malicious scripts, malware, backdoors, web shells, hacker tools, blackhat SEO, phishing pages, and more.
The malware scanning engine finds and automatically cleans already-infected files. For webmasters, this means you can rid your website of infection with a single click.
Our domain reputation checking and blacklist monitoring technology is also integrated and available to customers at no extra cost. This tool checks websites against 60 different blacklists, letting website owners know if their website's reputation is at risk.
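Most of these lists are DNS-based blacklists (DNSBLs), and a lookup works as sketched below: the IP's octets are reversed and prepended to the blacklist zone, and an A-record answer means the address is listed. The zone name and resolver stub here are illustrative, not the specific 60 lists we query.

```python
import socket

def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build the DNSBL query name, e.g. 1.2.3.4 -> 4.3.2.1.zone."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip: str, zone: str, resolve=socket.gethostbyname) -> bool:
    """True if the DNSBL returns an answer for this IP (i.e. it is listed)."""
    try:
        resolve(dnsbl_query_name(ip, zone))
        return True
    except OSError:  # NXDOMAIN and network failures surface as socket errors
        return False

# Usage with a stubbed resolver (no network), using a hypothetical zone.
listed = {"7.113.0.203.dnsbl.example.org"}

def fake_resolve(name):
    if name in listed:
        return "127.0.0.2"  # DNSBLs answer in 127.0.0.0/8 to indicate a listing
    raise OSError("NXDOMAIN")

assert is_listed("203.0.113.7", "dnsbl.example.org", resolve=fake_resolve)
assert not is_listed("198.51.100.2", "dnsbl.example.org", resolve=fake_resolve)
```

A monitoring service simply repeats this lookup across each configured blacklist zone and alerts the site owner on any hit.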
The above is only a glimpse of the lengths we go to in protecting you and the applications you have entrusted to our care.
Report Bugs and Request Features with Issue Trackers
Web Hosting Magic investigates all reported vulnerabilities, tracks known issues, and maintains an internal bug tracking system where fixes take place. We review every new bug report submitted by users and work to validate the reported vulnerability. If additional information is required to validate or reproduce the issue, we will work with you to obtain it. When the initial investigation is complete, we will deliver the results to you along with a plan for resolution and public disclosure.
When we've fixed a bug in production, we'll indicate this and then we'll close the issue.
If you see an issue that needs our prompt attention, please take a look at these pages: Bug and Vulnerabilities Bounty Program and Open Bounties For Security Vulnerabilities & Bugs, and then let us know.
To protect our customers, we ask that you not post or share any information about a potential vulnerability in any public setting until we have researched, responded to, and addressed it, and informed customers if needed. We also respectfully ask that you not post or share any data belonging to our customers. Addressing a valid reported vulnerability takes time, and the timeline varies with the severity of the vulnerability and the affected systems.
Finally, if you suspect that any of our services or resources are being used for suspicious activity, you can report it to the Web Hosting Magic Abuse Team here.