Custom Data Backup and Redundancy Solutions

Companies face a challenge when it comes to reliable data backup and redundancy, as off-the-shelf tools often fall short. Blueberry designs custom backup solutions for Windows and Linux servers, tailored to your infrastructure and optimised for your data. We can create an effective and cost-efficient strategy for even the most complex systems.

Data Backup and Redundancy

As data volumes grow and systems become more complex, off-the-shelf backup tools often fall short of expectations. Companies need to know that their data is stored securely and can be recovered quickly – using the minimum of space for the maximum benefit.

Blueberry Consultants has strong expertise in the design of custom backup solutions for Windows and Linux servers. We can create database backup and database redundancy systems that are tailored to your infrastructure and optimised to suit your data. Assessing your specific backup issues from the outset, we can design a solution that works optimally with your IT systems and strikes the right balance between reliability, accessibility, security and cost.

Introduction – Database Backup and Database Redundancy

IT systems are now highly complex, and data volumes continue to grow rapidly, making reliable database backup and redundancy a significant challenge for many companies. Users are more mobile, and organisations often operate a diverse mix of systems and platforms. With data growth rates remaining high, many traditional backup systems are under pressure, while achieving high levels of redundancy can be costly if solutions exceed actual requirements.

Blueberry Consultants understands that every customer is different, both in terms of architecture and budget. We consider everything from simple off-the-shelf tools to fully custom solutions, always focused on increasing capacity, ensuring reliability, and reducing costs.

We have deep expertise across modern infrastructure and database environments, including:

  • Cloud and on-premise databases – supporting traditional SQL Server and Oracle installations as well as cloud-native databases such as Azure SQL, AWS RDS, and Google Cloud SQL.
  • Virtualisation and containerisation – from VMware to Kubernetes, serverless computing, and open-source alternatives like Proxmox, helping customers optimise resource usage and reduce reliance on legacy setups.
  • Security infrastructure – including Cisco firewalls as well as next-generation and cloud-native security solutions (Palo Alto, Fortinet, AWS Network Firewall, Azure Firewall) and Zero Trust architectures.

This wide-ranging expertise is concentrated in a tight-knit team, allowing us to design and implement effective backup strategies and redundancy plans tailored to even the most complex and modern IT infrastructures.

Backup Scenarios

Of course, many large organisations spend considerable amounts of money on backup tools. In many cases, these products and systems work perfectly well. In what situations might Blueberry’s skills deliver business benefit?

Scenario: Optimisation of backup strategy to suit the data, using custom compression and incremental transfer approaches.

Explanation: Many standard database backup systems are quite simple in approach – they effectively just copy files. But many file sets contain redundancy which can be exploited to dramatically reduce backup sizes and copy times.

Scenario: Handling special cases – e.g. backup of extremely large files over unreliable links.

Explanation: Again, standard products may fail to handle extreme cases. We can design solutions that cope with these edge cases reliably.

Scenario: Off-site backup or replication of databases.

Explanation: Conventional database backup strategies tend to make daily backups. For MS SQL Server, we can use transaction log shipping to achieve offsite backup with 15-minute resolution.

Scenario: Integration of virtualisation technologies with backup systems.

Explanation: Modern backup systems now leverage policy-based automation, allowing backups to be configured at the virtual machine or application level. This approach reduces the risk of human error and eliminates the need for manual file selection, ensuring that all critical data is reliably protected without constant intervention.

Scenario: Special requirements can often be met at low cost using off-the-shelf free tools.

Explanation: Using only free Linux tools, it’s possible to create a simple backup system that gives users easy access to versions of their files from previous days – without needing to make separate copies (see the sketch below). In addition, Linux can take advantage of copy-on-write filesystems such as Btrfs and ZFS, which store snapshots of old data as new data is written; any data common to the old and new versions is shared, making efficient use of storage.
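
As an illustration of that last scenario, here is a minimal Python sketch of a snapshot-style daily backup built on rsync's --link-dest option: unchanged files are hard-linked to the previous day's snapshot, so each dated directory presents a full view of the data while only changed files consume new space. The paths are hypothetical assumptions, not a real configuration.

```python
#!/usr/bin/env python3
"""Sketch: snapshot-style daily backups using rsync --link-dest.

Files unchanged since yesterday's snapshot are hard-linked rather than
copied, so every dated directory looks like a full backup while only
changed files take up new space. Paths below are illustrative.
"""
import datetime
import os
import subprocess

SOURCE = "/home/data/"          # directory tree to protect (assumed)
BACKUP_ROOT = "/backups/daily"  # must already exist (assumed)

today = datetime.date.today()
dest = os.path.join(BACKUP_ROOT, today.isoformat())
link_dest = os.path.join(BACKUP_ROOT, (today - datetime.timedelta(days=1)).isoformat())

cmd = ["rsync", "-a", "--delete"]
if os.path.isdir(link_dest):
    # Hard-link files that are unchanged since yesterday's snapshot.
    cmd.append(f"--link-dest={link_dest}")
cmd.extend([SOURCE, dest])

subprocess.run(cmd, check=True)
print(f"Snapshot written to {dest}")
```

Run daily from cron, this gives users browsable day-by-day snapshots at a fraction of the storage cost of full copies.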

Backup vs Redundancy vs Archiving

The technological overlap between database backup, database redundancy, and archiving can often lead to confusion, but each plays a distinct role in safeguarding and managing data.

Backups create point-in-time copies of data, ideally keeping multiple historic versions. Modern backup systems increasingly use policy-based automation and hybrid cloud or SaaS-based solutions (e.g., Veeam, Rubrik, Cohesity) rather than relying solely on traditional on-premises appliances.

Redundancy establishes additional copies of systems or data that can take over if the original fails. While simple "straight copy" methods like RAID or manual replication are still used, they are now often supplemented by auto-failover distributed systems. Active-active redundancy, where multiple systems serve traffic simultaneously, has become more common than passive standby copies, improving availability and minimising downtime.

Archiving focuses on retaining selected data for the long term. Modern compliance-driven archiving often uses immutable storage solutions (e.g., AWS S3 Object Lock, Azure Blob Immutable Storage) to protect records from tampering, including ransomware attacks. While archiving complements backup and redundancy, it is not a substitute for them.
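
As an illustration of compliance-driven immutability, the sketch below uses the AWS boto3 SDK to apply a default Object Lock retention rule to a bucket. The bucket name and retention period are placeholder assumptions, and the bucket must have been created with Object Lock enabled.

```python
import boto3

# Apply a default Object Lock retention rule so newly written archive
# objects cannot be modified or deleted during the retention period
# (COMPLIANCE mode binds even administrators). The bucket name and the
# seven-year retention period are placeholders for illustration.
s3 = boto3.client("s3")
s3.put_object_lock_configuration(
    Bucket="example-archive-bucket",
    ObjectLockConfiguration={
        "ObjectLockEnabled": "Enabled",
        "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Years": 7}},
    },
)
```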

Most modern strategies combine backup, redundancy, and archiving in a coordinated approach. Tiered planning is key: not all data is equally critical, so prioritising the restoration of essential applications first helps reduce downtime and optimise storage costs.

Full

A full backup copies every file in a system. Restore times are fast, but backups are time-consuming and space-intensive, so scheduling and data prioritisation remain important considerations. Modern systems may also leverage cloud object storage (e.g. AWS S3, Azure Blob) for scalable and cost-effective full backups.

Differential / Incremental

Differential and incremental backups fill in the gaps between full backups, storing only changes to data. They require a fraction of the server CPU, bandwidth and storage that full backups do. The risk of data loss is higher than with full backups, and restore times can be slower. Snapshots have become more efficient, with cloud providers offering instant-recovery snapshots, EBS Snapshots Archive, and multi-cloud snapshot management tools to accelerate recovery.

Synthetic

The standalone concept of synthetic backups is fading. Modern systems implement incremental-forever strategies with cloud-based synthetic fulls (e.g. Veeam Synthetic Full, AWS Backup automatic consolidation) using object storage. This reduces storage overhead and eliminates repeated full backups while providing rapid restores.

Continuous Data Protection (CDP)

CDP continuously tracks data modifications, enabling near-instant recovery to any point in time. Modern CDP solutions leverage AI-driven deduplication, edge-based CDP and efficient real-time sync (e.g. Zerto, Druva, AWS Backup near-CDP features). The bandwidth burden is now mitigated by WAN optimisation, faster networks and smarter delta-sync algorithms, making continuous replication practical even for high-volume workloads.

Mirroring

Mirroring creates a direct copy of data across two or more drives or systems. Only new or modified files are copied after the initial mirror, and recovery is rapid because data is not compressed. Active-active mirroring across cloud regions is now increasingly common, providing redundancy alongside high availability.

Tools and Techniques

Rsync

A widely used tool for incremental backup of files across the WAN. Most common on Linux, but it also works on Windows.

Duplicity

A more specialised Linux file backup tool built on rsync’s delta-transfer technology (librsync). It handles network faults more gracefully than plain rsync and can use less CPU.

MD5 / SHA checksums

Modern checksum algorithms are used extensively in backup and file transfer to identify files and confirm successful transfer. We have access to optimised libraries that perform these calculations particularly quickly.
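
For illustration, a chunked checksum calculation using Python's standard hashlib library looks like this; reading in fixed-size chunks keeps memory use flat even for very large backup files (the commented file path is hypothetical).

```python
import hashlib

def file_checksum(path: str, algorithm: str = "sha256",
                  chunk_size: int = 1 << 20) -> str:
    """Compute a file checksum in 1 MB chunks so memory use stays flat
    regardless of file size."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Comparing digests before and after transfer confirms the copy is intact.
# print(file_checksum("backup-2024-01-01.tar"))  # illustrative path
```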

BB FTE (File Transfer Engine)

A Blueberry-developed library that supports reliable block-based transfer of binary files over HTTP/S, with per-block checksums and strong resume capabilities. FTE is superior to file transports such as FTP and plain HTTP because it detects block-level errors and retries individual blocks rather than whole files.
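
FTE itself is a proprietary Blueberry library, but the general block-checksum idea can be sketched as follows. The block size, function names and manifest exchange are illustrative assumptions, not FTE's actual interface.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB blocks (illustrative choice)

def block_checksums(path: str) -> list[str]:
    """Return one SHA-256 hex digest per fixed-size block of the file."""
    sums = []
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(BLOCK_SIZE), b""):
            sums.append(hashlib.sha256(block).hexdigest())
    return sums

def blocks_to_resend(local_path: str, remote_sums: list[str]) -> list[int]:
    """Compare local block checksums with those reported by the receiver;
    only blocks that are missing or corrupt on the far side need resending."""
    local_sums = block_checksums(local_path)
    return [i for i, s in enumerate(local_sums)
            if i >= len(remote_sums) or remote_sums[i] != s]
```

Because a failed or interrupted transfer only invalidates individual blocks, a resumed session re-sends kilobytes rather than gigabytes.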

MS SQL Server Transaction Log Shipping

This is an MS SQL Server feature which allows efficient continuous backup of SQL Server databases. With the correct configuration, SQL Server will write out a file containing all the changes to a database every 15 minutes. This file can be sent to an offsite server and restored. The backup server does not require an MS SQL licence.
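
The log backups themselves are produced by SQL Server on its own schedule; as a rough illustration of the offsite-copy leg only, a script like the following could forward new .trn files to the standby site. All paths are hypothetical.

```python
"""Sketch: forward new transaction-log backups (.trn) to a standby site.

SQL Server writes the .trn files on its own 15-minute schedule; this
script, run periodically, copies across any it has not yet shipped.
Paths are hypothetical assumptions.
"""
import shutil
from pathlib import Path

LOG_DIR = Path(r"D:\LogShipping\Outbound")      # where SQL Server drops .trn files
OFFSITE = Path(r"\\backup-server\logshipping")  # standby share (assumed reachable)

for trn in sorted(LOG_DIR.glob("*.trn")):
    target = OFFSITE / trn.name
    if not target.exists():
        shutil.copy2(trn, target)  # copy, preserving timestamps
        print(f"Shipped {trn.name}")
```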

VMware

A popular virtualisation tool. VMware allows multiple virtual machines to run on a single host machine. The relevance for backup is that the virtual machines can be suspended and then copied to a backup server.

DRBD

A Linux technology used to allow reliable replication of disk volumes across a LAN. We’ve used this to establish auto-failover for VMware servers.

Backing Up To The Cloud

Cloud-based services such as Amazon S3 offer a cost-efficient and scalable option for offsite backup. Users can easily access data remotely, and businesses can expand their storage as needed. There are still issues to consider when choosing a solution. Pricing models vary from provider to provider, ranging from tiered pay-as-you-go options to basic flat fees, and some vendors charge for additional backup services. Sufficient bandwidth is a crucial consideration, although many providers will only send changed data over the network after the first full backup. Data security is also a valid concern, and Service Level Agreements (SLAs) should be carefully scrutinised to ensure the proper measures are in place.

The same scrutiny applies to all backup offerings, particularly those delivered as part of a package by hosting companies or Internet Service Providers. It pays to find out exactly what level of backup is on offer. SLAs should stipulate specific levels of data availability and set timeframes for recovery.

Common Database Backup Issues

Live data presents a number of challenges for database backup, particularly in the case of database files that are continuously being written to. Ensuring no changes are lost in the backup process can require considerable configuration. Blueberry can apply a number of optimisation techniques to ensure continuous backup of live data.

Backing up Web or database servers can also be tricky. Traditionally, a list of folders is selected for backup, but this approach is open to human error. A full backup is the failsafe option, but with storage space at a premium, solutions that combine one full backup with subsequent incremental backups offer a cost-efficient alternative. One example is the incremental snapshot system provided with the cloud-based Amazon EBS service.
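
As a minimal sketch of that approach, the boto3 snippet below requests an EBS snapshot; AWS stores only the blocks changed since the volume's previous snapshot, so repeated snapshots are naturally incremental. The volume ID and region are placeholders.

```python
import boto3

# Request an EBS snapshot. AWS stores only the blocks that changed since
# the previous snapshot of this volume, so regular snapshots are cheap.
# The region and volume ID below are illustrative placeholders.
ec2 = boto3.client("ec2", region_name="eu-west-2")
snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",   # hypothetical volume ID
    Description="Nightly incremental snapshot",
)
print("Started snapshot:", snapshot["SnapshotId"])
```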

Backing up virtual environments demands a different approach from that used for physical servers. The critical factors are storage availability, configuration and management. If these are properly addressed, the benefits can be considerable. Blueberry has experience in engineering unusual backup systems in virtual environments and in mirroring virtual machines from one server to another.

Achieving high levels of redundancy usually requires two servers. This can be costly, and companies should consider whether instant redundancy is really vital for their business. With solutions such as Amazon EBS, for example, a new system can be set up from a snapshot in as little as 30 minutes without the need for a second server. Redundancy of database servers is more complex to configure and usually requires some level of mirroring or replication. Blueberry Consultants leverages a range of tools and techniques to optimise this process, including custom compression, encryption, data deduplication and incremental transfer approaches.

Testing Backups

Testing is often the missing link in database backup strategies. The time and money invested in backing up data is too great to risk your recovery plan failing at the critical moment. Companies should schedule regular testing of their backup and restore processes. Cloud setups offer the most convenient test environment with the capacity to restore to a second server instance temporarily. Backup reporting tools can also help safeguard your data by tracking backup failures and determining their causes.
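
One simple automated restore test is to verify every restored file against a checksum manifest recorded at backup time. The sketch below assumes a hypothetical manifest in the common "sha256sum" line format; a full test should also confirm that applications actually start against the restored data.

```python
"""Sketch: verify a restored backup against a checksum manifest.

Assumes the backup job recorded one "<sha256>  <relative path>" line per
file. The restore location and manifest path are hypothetical.
"""
import hashlib
from pathlib import Path

RESTORE_ROOT = Path("/tmp/restore-test")     # scratch restore target (assumed)
MANIFEST = Path("/backups/manifest.sha256")  # written at backup time (assumed)

failures = 0
for line in MANIFEST.read_text().splitlines():
    expected, rel_path = line.split(maxsplit=1)
    restored = RESTORE_ROOT / rel_path
    # Fine for a sketch; chunked hashing is preferable for very large files.
    actual = hashlib.sha256(restored.read_bytes()).hexdigest()
    if actual != expected:
        failures += 1
        print(f"MISMATCH: {rel_path}")

print("Restore test", "PASSED" if failures == 0 else f"FAILED ({failures} files)")
```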

Conclusions

Reporting tools of this kind look at the whole data-protection lifecycle, and that holistic approach is perhaps the key to optimising backup on an ongoing basis. The big picture is critical. You need to know your business, know your provider and know your limits in terms of budget, bandwidth and storage space before you go looking for a solution. This means not only analysing your data volumes and usage, but also assessing how well your current backup tools are aligned with your business priorities. And don’t let technology stand in the way of those priorities. Whatever works best for your company can be made to work best for your IT systems.

Case Study Example – ABC Ltd

ABC Ltd has a heterogeneous collection of Windows and Linux servers located at a central data centre, including a number of systems running VMware and some SQL Servers. The company needed to demonstrate to clients that it had a disaster recovery plan in place and that all key data was backed up offsite.

Blueberry designed a backup plan based on the three different data types used within the ABC network – conventional user files, SQL databases and large virtual machine images. A single new server was installed at a remote location and configured with a Cisco firewall and a dedicated 24 Mbps DSL line. For the most important user files, a daily sync job was used to replicate the files securely over SSH. For the SQL databases, transaction log shipping was used in conjunction with Blueberry’s FTE system to replicate the databases to a parallel SQL Server running on the backup system. For the large VM images, Duplicity was configured to run on a slow incremental cycle, establishing backups of 200 GB of VM images on a rotating monthly basis.

We’re easy to talk to – tell us what you need. Don’t worry if you don’t know about the technical stuff; we will happily discuss your ideas and advise you.