How Bureau Veritas migrated 85% of its applications to the AWS cloud

With a strategic five-year plan, Bureau Veritas (BV) set out on its digital transformation project in 2015. Core to that transformation was migration of the vast bulk of its applications and infrastructure to the AWS cloud. Along the way, it faced challenges in migration, day-to-day management, costs and security. We talk to BV’s IT director, Jean-Marc Devos Plancq, about the transition.

Bureau Veritas – or BV to those who know it well – is one of the oldest companies still active anywhere. Formed in 1828 in Belgium, but now headquartered in France, the testing, inspection and certification firm employs 75,000 people in 1,500 offices in 150 countries.

Its original raison d’être was maritime risk assessment, but since then, Bureau Veritas has diversified into sectors that include automotive, railways, public sector infrastructure, transport, supply chain, energy, agro-food and health. With a presence on five continents, the group turned over €4.6bn in 2020.

“Acceleration of digitisation was a key pillar of the plan,” says Devos Plancq. “And that pillar comprised two aspects: delivery of digital services to our clients, and the improvement and digitisation of our internal processes.”

So, the IT director began to look at what was on offer from public cloud providers. “Historically, we had used private datacentres to host our applications,” he says. “Capex had been a major budget component, but we wanted to gain some financial agility and not be tied to amortisation cycles over three, four, five years for different solutions.”

Devos Plancq also points out the time-consuming nature of such procurement cycles. “You have calls for tenders to replace a SAN that will cost hundreds of thousands or even millions of euros,” he says. “Then it takes months to select the vendor and the product, and to deliver and install it. A large part of the IT department can be tied up for 18 months just to add storage capacity. We wanted to avoid this inertia.”

Besides the investment and time needed for new deployments on-site, the IT chief wanted to reduce the burden of managing all these elements, as well as networks, patching, and so on. And so the agility promised by IaaS, PaaS, SaaS (infrastructure, platform and software as a service) and the cloud appeared very seductive.

“We wanted to orient ourselves towards what would bring value for customers, whether internal or external, that use the applications we had traditionally hosted,” says Devos Plancq.

At the time, six years ago, Bureau Veritas took the view that AWS was “the most mature provider, with a platform that had the largest number of directly usable services”, he adds.

And so, by February 2021, BV hosted 85% of its applications on AWS – but that transition didn’t happen overnight.

Three phases to cloud transition

“We started with a discovery phase, so we could understand how the cloud worked and how we could integrate it into our application environment,” says the IT director. “Also, we had to prepare support for our teams because they were about to completely change the tools they worked with.”

This exploratory period – lasting 18 to 24 months – saw applications move to the cloud as the opportunity arose. “We moved applications to AWS that were simple to deploy, notably those already delivered through DevSecOps and built on automated, secured technologies such as Java,” says Devos Plancq.

With time, confidence and knowledge acquired, BV adopted its “cloud-first” approach. “According to this principle, any new applications had to be developed in the cloud, unless it was technically impossible,” he says. That period lasted two more years before the third stage was reached.

“When we believed we had good knowledge of the AWS platform, we decided to migrate all servers for our corporate solutions to the cloud, so we could shut down our on-site infrastructure,” says Devos Plancq.

This phase of migration to the AWS cloud meant moving Oracle and SQL Server databases to Amazon’s managed Relational Database Service (RDS). But the Bureau Veritas IT teams didn’t settle for a simple “lift and shift”. “We integrated a version upgrade to our databases into their move to the cloud,” says Devos Plancq.

“It was easier to migrate our databases to the cloud than to carry out an update on-site, because that would have required infrastructure changes too. We limited ourselves to creating backup and recovery partitions on the instances that we mounted on AWS, and that was it.”
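Bureau Veritas has not published the tooling behind this step, but the pattern it describes – standing up a fresh RDS instance on a newer engine version, then restoring the on-site backup into it – can be sketched with boto3. This is a minimal sketch under assumed names, versions and sizes, not BV’s actual procedure.

```python
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

# Hypothetical sketch: create a fresh RDS SQL Server instance on a newer
# engine version than the on-site database, then restore the backup into it.
rds.create_db_instance(
    DBInstanceIdentifier="corp-app-db",        # hypothetical instance name
    Engine="sqlserver-se",
    EngineVersion="15.00.4073.23.v1",          # hypothetical newer version
    DBInstanceClass="db.m5.xlarge",
    MasterUsername="admin",
    MasterUserPassword="change-me",            # use Secrets Manager in practice
    AllocatedStorage=500,                      # GB
)

# Block until the instance is available, then restore the backup and run
# the regression tests mentioned below.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="corp-app-db")
```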

Regression testing was carried out “to make sure everything was working, that there were no connectivity problems”, he adds.

Successful cloud migration needs a few tricks

Migrating a DBMS to Amazon RDS can sometimes bring surprises, but the BV IT teams didn’t have any problems. “The bulk of the functionality in SQL databases is taken care of by RDS,” says Devos Plancq. “But there is some functionality that can’t be taken on by AWS. If you use it, you have to find another solution.”

This was one of those times when outside help was needed, so BV subscribed to the AWS Migration Acceleration Program (MAP).

The RDS-managed databases communicate with applications deployed on Elastic Beanstalk, one of the longest-standing services provided by AWS and one used heavily by Bureau Veritas. “This PaaS allows deployment of applications and brings the benefits of automated platform scaling,” says the IT chief. “You manage the environment rather than the servers, because the platform manages itself according to the number of users at any one time.”

The IT team administers about 50 applications in this way, out of a total of 115.

“For custom developments, the PaaS allows us to guarantee a level of performance according to the time of day, the number of users connected, but also to optimise our costs when activity is low or non-existent,” says Devos Plancq.
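As a rough illustration of managing “the environment rather than the servers”, here is a hedged boto3 sketch of a load-balanced Elastic Beanstalk environment that scales itself between bounds. The application name, environment name and solution stack are hypothetical, not BV’s.

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="eu-west-1")

# Hypothetical sketch: a load-balanced environment that Elastic Beanstalk
# scales between 2 and 8 instances on its own. The team manages the
# environment definition; the platform manages the servers.
eb.create_environment(
    ApplicationName="corp-portal",                 # hypothetical application
    EnvironmentName="corp-portal-prod",
    SolutionStackName="64bit Amazon Linux 2 v3.4.9 running Corretto 11",
    OptionSettings=[
        {"Namespace": "aws:elasticbeanstalk:environment",
         "OptionName": "EnvironmentType", "Value": "LoadBalanced"},
        {"Namespace": "aws:autoscaling:asg",
         "OptionName": "MinSize", "Value": "2"},
        {"Namespace": "aws:autoscaling:asg",
         "OptionName": "MaxSize", "Value": "8"},
    ],
)
```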

Most of the applications developed by Bureau Veritas are written in Java. Elastic Beanstalk was built with Java in mind and supports frameworks and languages that include .NET, Node.js, PHP, Python, Go and Ruby.

On the other hand, Elastic Beanstalk requires you to take several peculiarities into account, says Devos Plancq. “It is important that your applications aren’t dependent on user sessions,” he adds. “Because the platform decides which server is active or not, users can lose their progress within a task. So, you have to manage sessions in a shared cache.”

For that, Bureau Veritas uses Amazon’s ElastiCache, a service based on the in-memory data stores Redis and Memcached.

“This requires a little tweaking in the application to externalise user sessions into the cache, but it’s equally important that sessions don’t have a big footprint when they are serialised,” says Devos Plancq. “Ideally, you should use stateless applications.”
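The pattern he describes – externalising sessions to a shared Redis cache so that any instance can serve any request – might look like the following minimal sketch. The ElastiCache endpoint and key scheme are hypothetical, and BV’s applications are Java rather than Python.

```python
import json
import uuid

import redis

# Hypothetical ElastiCache (Redis) endpoint; in practice this would come
# from configuration rather than a hard-coded hostname.
cache = redis.Redis(host="sessions.example.euw1.cache.amazonaws.com", port=6379)

SESSION_TTL_SECONDS = 1800  # expire idle sessions after 30 minutes

def save_session(session_id: str, data: dict) -> None:
    # Keep the serialised footprint small: store only what the next
    # request needs, not whole objects.
    cache.setex(f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps(data))

def load_session(session_id: str) -> dict:
    raw = cache.get(f"session:{session_id}")
    return json.loads(raw) if raw else {}

# Any instance behind the load balancer can now pick up the session,
# because it lives in the shared cache rather than on one server.
sid = str(uuid.uuid4())
save_session(sid, {"user": "jdoe", "step": 3})
print(load_session(sid))
```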

Cloud-first bears fruit

Devos Plancq is full of praise for how quickly it is possible to develop and deploy solutions via AWS services. He points to the example of Bureau Veritas’s Restart your Business, which provides services to help customers reopen workplaces and spaces to the public after Covid restrictions. The application was developed in 14 days and deployed in 85 countries in “three or four days”, he says.

He points to the number of new services regularly announced by AWS, many of them pushing towards a serverless approach. “We’re taking things to the next level beyond PaaS with services like Lambda, which let us buy milliseconds’ worth of compute to execute applications,” he says.
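For illustration, a minimal Lambda handler shows the unit of deployment in that model: a single function that AWS invokes on demand and bills by the millisecond, with no server for the team to manage. The event shape assumed here is hypothetical.

```python
import json

def handler(event, context):
    # 'event' carries the invocation payload, e.g. from API Gateway;
    # the function exists only for the milliseconds it takes to run.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```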

Automation of operations is also progressing well. BV’s IT teams are committed to an infrastructure-as-code approach to deploying and upgrading the technical infrastructure, operating systems, applications and other services.

On this subject, Devos Plancq says his teams make use of services that help automate operations, with alerts that warn, for example, when usage of an S3 storage bucket crosses a defined threshold and action is needed.
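One way to raise that kind of alert, sketched here under assumed names and thresholds, is a CloudWatch alarm on the BucketSizeBytes metric that S3 publishes daily for each bucket and storage class.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

# Hypothetical sketch: alarm when a bucket's stored bytes pass 5 TiB.
cloudwatch.put_metric_alarm(
    AlarmName="corp-archive-bucket-size",          # hypothetical alarm name
    Namespace="AWS/S3",
    MetricName="BucketSizeBytes",
    Dimensions=[
        {"Name": "BucketName", "Value": "corp-archive"},  # hypothetical bucket
        {"Name": "StorageType", "Value": "StandardStorage"},
    ],
    Statistic="Average",
    Period=86400,                  # S3 publishes this metric once a day
    EvaluationPeriods=1,
    Threshold=5 * 1024 ** 4,       # 5 TiB, expressed in bytes
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[],               # e.g. an SNS topic ARN for notifications
)
```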

“We’re working to connect our systems via APIs [application programming interfaces],” he says. “We use API Gateway to allow local applications to talk to applications across the group, but also to allow access to customers and partners.”

API Gateway is used, for example, in Code’n’go, the highway code learners’ application developed by BV and delivered via driving schools.
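From a partner application’s point of view, consuming such an API is plain HTTPS: API Gateway identifies clients on a usage plan by the x-api-key request header. A minimal sketch, with an entirely hypothetical endpoint and key:

```python
import json
import urllib.request

# Hypothetical API Gateway endpoint and API key for a partner application.
url = "https://abc123.execute-api.eu-west-1.amazonaws.com/prod/lessons"
req = urllib.request.Request(url, headers={"x-api-key": "partner-key-here"})

with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read()))
```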

Obviously, adapting all of an organisation’s services to such a new way of working demands financial vigilance. Bureau Veritas has gradually shifted to a FinOps approach that makes use of EC2 Savings Plans, as well as automating the startup and shutdown of the test-and-dev environments used by developers.
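The automation half of that FinOps approach can be as simple as a scheduled job that stops developer instances each evening and restarts them each morning. A hedged sketch of the shutdown side, assuming test-and-dev instances carry an env=dev tag:

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Hypothetical sketch: stop every running instance tagged env=dev.
# Run in the evening (e.g. from a scheduled Lambda), with a matching
# start job each morning, to avoid paying for idle overnight hours.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:env", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

if instance_ids:
    ec2.stop_instances(InstanceIds=instance_ids)
```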

After six years, adapting to these services and their limits is part of the daily life of BV’s IT teams, but they also face other difficulties.

“We are often constrained by the technical prerequisites of software packages,” says Devos Plancq. Bureau Veritas uses Documentum for EDM, Sybele for disaster recovery, Tableau for BI and reporting, and SAP for financials.


Where to run containers? Bare metal vs virtual machines

Containers are a serious and emerging contender as a method of application delivery. Although they are by no means in use universally yet, most enterprises have deployed containers somewhere or are investigating their capabilities.

Their advantages centre on the ability to abstract everything needed to run applications away from the hardware, and with the potential for many – very many, in fact – container instances to be created and run on demand, they are supremely scalable.

Of course, quite often container clusters and orchestration are run in virtual server environments, but they don’t have to be. They can also run directly on bare-metal servers. In this article, we’ll look at bare metal vs virtual machines (VMs) and where to run containers.

Running containers: bare metal or VM?

Containers are a form of virtualisation, but one in which the application and all the microservices needed for its execution run on top of the host server operating system, with only the container runtime engine between the two. Virtualised server environments, meanwhile, see the hypervisor sit between the hardware and one or more guest operating systems, with applications running inside those guests.

Most questions around whether to run containers on VMs or on bare metal derive from this basic fact.
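That layering difference is easy to demonstrate: a container has no kernel of its own, so a command run inside one reports the host’s kernel, where a VM would report its guest’s. A small sketch using the Docker SDK for Python, assuming a local Docker daemon is available:

```python
import docker

client = docker.from_env()

# 'uname -r' inside an Alpine container prints the *host* kernel version,
# because the container shares the host OS kernel rather than booting its own.
host_kernel = client.containers.run("alpine", "uname -r", remove=True)
print(host_kernel.decode().strip())
```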

Key decisions: Performance vs convenience, perhaps cost too

The decision on where to deploy container infrastructure pivots on performance requirements vs convenience. It’s far more convenient to run container orchestrators and their nodes on VMs, but you’ll lose out somewhat on performance. Having said that, if you want the performance benefits of bare metal, you probably need to run your own on-prem environment and be prepared to do the work that compensates for the conveniences a hypervisor environment brings.

Also, cost can come into things. Because bare-metal servers can run a lightweight Linux OS (such as CoreOS or its descendants), they avoid a lot of the cost of hypervisor licensing. Of course, that also means they miss out on advanced functionality available from virtualisation environments.

Benefits and penalties of virtualisation

Putting a virtualisation layer on top of the host OS means adding a layer of software to the environment, which brings benefits as well as penalties.

In a virtualisation environment, the hypervisor brings a lot of functionality and allows for maximised hardware utilisation.

Key benefits here are that workloads can be migrated between hosts easily, even when the hosts don’t share the same underlying OS. That is especially useful for containers, which are prized for their portability between locations but are dependent on the OS they were built for. Using a particular virtualisation landscape provides a consistent software environment in which to run containerised applications even if the host OS differs.

But at the same time, all the things about virtualisation that bring benefits also come with penalties. That is rooted in the fact that your physical resources simply have to do more computing because of the added layers of abstraction.

That is most clearly visible in the performance difference between containers that run on bare metal and in virtualised environments. Benchmarking tests carried out by Stratoscale found that containers on bare metal performed 25% to 30% better than in VMs, because of the performance overhead of virtualisation.

Meanwhile, VM environments tend to have resources – such as storage allocated at startup – that remain provisioned to them. Diamanti, which provides a Kubernetes platform aimed at use on bare metal and in the cloud, claims resource utilisation can be as low as 15% in virtualised environments and that it can cut hardware use by 5x.

Despite the inherent performance advantage bare metal holds over the added layers of virtualisation, VMware has made efforts with its Tanzu Kubernetes platform to mitigate those overheads.

Bare metal downsides

Having said all that, there are downsides to containers on bare metal.

Key among these is that container environments are OS-dependent, so one that is built for Linux will only run on Linux, for example. That potentially puts limits on migration and may work against you in the cloud, where bare-metal options are scarcer and, where available, cost more. Given that one of the key advantages of containers is the ability to migrate workloads between on-prem sites and the cloud, that’s not good news.

Bare metal container deployments will also lack features that come with virtualisation software layers, such as rollback and snapshots.

Deploying containers to bare metal can also make it more difficult to mitigate risk via redundancy. VMs allow you to split nodes between them, whereas container nodes installed on bare metal are likely to be fewer in number, less portable and less shareable.


Oil and gas firm halves backup licence cost with Hycu move

Oil and gas exploration company Summit E&P has cut backup licence costs in half after switching from Veeam to Hycu and migrating from VMware to Nutanix and its native hypervisor, AHV.

Summit, a UK-based subsidiary of the Sumitomo Corporation, has only 17 employees but holds about 250TB of data, which is gathered by geophysical survey boats and exploratory wells.

Projects arrive as large files (300GB to 400GB) within datasets of flat files that can reach 1TB, and are analysed on high-end workstations.

The infrastructure had comprised NetApp storage, VMware virtualisation, and Veeam and Symantec backup software, the latter for physical servers.

An initial move to Nutanix hyper-converged infrastructure came in 2017 with the deployment of a three-node cluster. “Then, in 2020, we decided Nutanix was the way to go,” said Summit IT and data manager Richard Inwards.

“NetApp had become expensive and difficult to maintain and we got rid of ESX for [the Nutanix] AHV [hypervisor]. We got rid of Veeam because we got rid of ESX, but also because Hycu for Nutanix could back up physical servers.”

He added: “Veeam didn’t back up physical servers very well at the time.”

Summit holds about 200TB of data on-site with some held off-site and streamed to the cloud. It runs 13 virtual machines (VMs) plus four physical servers.

So, Summit deployed Nutanix, with the AHV hypervisor and Hycu backup, which provides incremental backup.

Inwards said licensing costs for Hycu are about half those for Veeam, but the key benefits are in ease of use.

“It’s a much more simple interface,” he said. “And we can now use one product for virtual and physical instead of two. Also, when we moved from ESX to Nutanix, Hycu handled the migration. We backed up from ESX and restored to Nutanix.

“The big benefit of Hycu is that it can do what Veeam couldn’t do, which is to integrate well with the Nutanix environment.”

Summit backs up about 50GB to 60GB per day via Hycu.

Hycu was spun off from the Comtrade Group into its own company in 2018. It offers backup software tailored to Nutanix and VMware virtualisation environments, as well as to Google Cloud, Azure and Office 365 cloud workloads. It also offers a product aimed at Kubernetes backup.


Valence brings storage virtualisation for the cloud era

Startup 22dot6 has launched its Valence storage virtualisation platform, with bold claims that it can transcend all existing third-party storage to provide a single view of all an organisation’s data on any media, from high-performance flash storage to Glacier-like cold storage in the cloud and even off-line tape.

The launch centres on 22dot6’s software-defined Transcendent Abstracted Storage System (Tass) architecture and its Valence software. The claim is that Valence allows enterprises to access, move and manage data assets transparently no matter the storage resource on which they reside, like a storage virtualisation product.

Valence offers multi-protocol (SMB, NFS, S3) access as well as advanced storage services including snapshots, replication, migration and cloning. It has CSI drivers to provide persistent storage for containers.

Valence is a Linux-based software product that can be deployed on bare metal or in a virtual machine environment. Key advantages cited are the ability to access stored data in any location, greater availability through unifying an organisation’s infrastructure globally, and zero-impact migration and decommissioning.

The product looks remarkably like the storage virtualisation products that were commonplace around a decade ago. Examples still in existence are IBM’s Spectrum Virtualize (formerly SAN Volume Controller) and DataCore SANsymphony. 22dot6 would claim its product goes way beyond these in capability, with the added ability to abstract cloud and offline data sources too.

Valence comes as two different types of node that handle metadata from the stored data (VSR nodes) and data services (DSX nodes) while the data remains on the existing storage. VSR and DSX can be built into clusters of multiple nodes in an architecture that CEO Diamond Lauffin – formerly of Storbyte and Nexsan – said provides huge performance advantages over existing storage array products.

“Look at Pure and Nimble. The idea of active-active dual controllers is a 1998 architecture,” said Lauffin. “It’s effectively active-passive, because it only allows you to use one controller at a time. And the truth is, when did a RAID controller last fail? I speak to about 75 companies a week and I’ve not heard of any that had this happen to them in the last five years. Why be limited by two-controller access where the user can’t use 50% of the throughput?”

Valence provides a single global namespace with “file-granular capability”. Access isn’t limited to volumes or LUNs and is based on user-defined policies that aggregate backend storage into tiers based on performance and capacity requirements.

Nodes can be user-selected commodity hardware or pre-configured by 22dot6. Pricing depends on the class of storage and capacity. “There would be a delta of 5x to 10x between the most and least performant storage,” said Lauffin.

In the Tass scheme, all sites can be considered active, sharing data in real time with any other site, and any site can act as a primary site.

According to Lauffin, the scale-up and scale-out nature of the architecture allows for throughput of up to 1,200GBps, with up to 60 nodes per location.

Valence supports off-line data management for archives like Amazon Glacier and tape.

Tass separates metadata management, data analytics/profiling and data services from the processes of providing IOPS and managing throughput. Valence assigns these different tasks to dedicated CPU and RAM resources across different nodes. Valence monitors performance in real time with predictive analysis and reporting to guarantee user-defined objectives, including read/write bandwidth, IOPS and latency.

Data protection can be configured at an application, user and sub-file level. Meanwhile, Valence can be configured for multi-tenancy with potentially thousands of independent customers or internal departments isolated from each other through a unified, multi-location management console.


MP-backed push to stop tech giants claiming super-deduction tax relief thwarted

A push by Labour MPs to block multinational tech giants from claiming tax relief through the government’s “super-deduction” policy has failed, despite concerns that the system could be used by tech firms such as Amazon to further minimise the amount of corporation tax they pay in the UK.

MPs were called to vote on a series of proposed amendments to the forthcoming Finance Bill 2019-2021. Among them was a proposal that sought to preclude tech firms in-scope of the government’s digital services tax policy from making capital allowance claims through the super-deduction system.

The amendment, tabled by Labour leader Keir Starmer with the support of five other Labour MPs, failed to receive the number of votes required to action the proposal during the vote on Monday 24 May 2021.

This means tech firms that are liable to pay the digital services tax will still be able to use the super-deduction to claim tax relief on plants and machinery purchases, despite mounting concerns that this could offer the likes of Amazon a means to markedly minimise the amount of tax they pay in the UK.

“As the Bill stands, the [super-deduction] will finish the job Amazon started, wiping out the last bit of tax it had to pay on the few parts of its business, the profits of which it has been unable to shift overseas,” said Labour MP James Murray during the House of Commons debate ahead of Monday’s vote.

“A vote in favour of our amendment would stop Amazon and a small number of similar firms benefiting from a giveaway of public money – public money that could be better spent for so many purposes, including to support British businesses that have been struggling throughout the past year.”

Why stop tech firms using the super-deduction?

Announced in the March 2021 Budget, the super-deduction has been described by chancellor Rishi Sunak as the “biggest two-year business tax cut in modern British history” which the government claims will unlock £20bn a year in investment during the policy’s lifetime.

It is one of a number of different policies set out in the Budget to stimulate the UK’s post-pandemic economic recovery, with the super-deduction specifically focused on providing companies with financial incentives to invest in the “productivity-enhancing” plant and machinery assets they need to help their businesses grow.

The policy, which runs from April 2021 to March 2023, will achieve this by allowing firms to deduct 130% of the cost of any qualifying plant and machinery investments from their taxable profits, and make use of a 50% first-year allowance for any qualifying special rate assets.

According to the government’s own figures, this means qualifying companies can cut their tax bills by up to 25p for every £1 they invest, leaving them with more money to reinvest in their own business growth plans.
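The 25p figure follows from combining the 130% deduction with the 19% corporation tax rate in force at the time. A worked illustration (ignoring any other reliefs a company might claim):

```python
# Worked illustration of the headline super-deduction saving.
investment = 1.00                 # £1 of qualifying plant and machinery spend
deduction = 1.30 * investment     # 130% first-year deduction
corporation_tax_rate = 0.19       # UK rate during the policy window

tax_saved = deduction * corporation_tax_rate
print(f"Tax saved per £1 invested: £{tax_saved:.3f}")   # ~£0.247, "up to 25p"
```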

However, concerns have been raised since the policy was announced about the potential for it to be used by multinational tech firms that process their UK sales through overseas subsidiaries to minimise the amount of tax they pay in this country.

Speaking to Computer Weekly, Murray said this was precisely the type of behaviour the defeated amendment was intended to curb. “It is unacceptable that, for many years, multinational tech giants have been shifting their profits overseas while other businesses pay their fair share here in Britain,” he said.

“It cannot be right for the government to give those same large multinationals a further tax write-off, and so we attempted to prevent public money from being spent on a ‘super-deduction’ for the biggest tech firms.

“More widely, the government should be taking clear steps to curb tax avoidance by large multinationals and to level the playing field to stop British businesses being undercut.”

Online retail giant Amazon has frequently been cited in these discussions as an example of a firm whose operations fall into the category outlined by Murray. For example, its UK sales are processed through a subsidiary in the renowned tax haven of Luxembourg, while its plant and machinery investments are made through Amazon UK Services, which provides warehousing and delivery services for its UK operations.

According to George Turner, director of investigative think-tank TaxWatch, the super-deduction could prove hugely beneficial for Amazon’s UK tax affairs if the company took advantage of it.

“Amazon do have a lot of infrastructure in their delivery network and they’re growing a lot, and during the pandemic they hugely benefited from the restrictions that were put in place to deal with it,” Turner told Computer Weekly.

“They pay very little tax in the UK as it is, although they do pay a little bit of tax, but their tax bill will be entirely wiped out by the super-deduction.”

According to figures pulled up by TaxWatch’s research team, Amazon UK Services made a pre-tax profit of £102m in 2019 and had a corporation tax liability of £6.3m, while the company’s own accounts show it spent £66.8m on plant and machinery, £80.4m on office equipment and £15.3m on computer equipment during the same year.

“If expensed at 130% [as per the terms of the super-deduction], this would entirely wipe out the taxable profits of the company before any deductions for staff pay awards,” said TaxWatch in its Amazon tax cut report, published post-Budget.
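TaxWatch’s conclusion can be sanity-checked against its own figures, on the report’s assumption that all three categories of spending qualify for the 130% deduction:

```python
# Sanity check of TaxWatch's figures for Amazon UK Services, 2019 (£m).
pre_tax_profit = 102.0
plant_and_machinery = 66.8
office_equipment = 80.4
computer_equipment = 15.3

qualifying_spend = plant_and_machinery + office_equipment + computer_equipment
super_deduction = 1.30 * qualifying_spend   # 130% first-year allowance

print(f"Deduction £{super_deduction:.1f}m vs pre-tax profit £{pre_tax_profit:.1f}m")
# ~£211.3m of deductions against £102m of profit: taxable profit wiped out
```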

Upset in the chamber

The TaxWatch report has since been cited regularly by Labour MPs during Finance Bill-related House of Commons debates over the last couple of months, as they have echoed Turner’s sentiments that it is firms like Amazon that stand to benefit most from the super-deduction policy.

Margaret Hodge has repeatedly spoken in the House of Commons about her misgivings over the super-deduction, while voicing support for a further amendment that sought to bar multinationals with a history of corporate tax avoidance from accessing the super-deduction. That amendment was not put to the vote.

“These companies refuse to contribute to the common pot, yet they are about to be gifted – by us, from that very same pot – a hugely generous tax relief [through the super-deduction],” said Hodge during the debate ahead of the vote on 24 May.

“These companies need the public services that taxes buy, from improved connectivity to transport infrastructure, from the education of their workforce to investment in the NHS to keep their workers healthy. However, they persist in deliberately not paying their fair share of corporation tax.

“These companies can undercut and destroy our high streets and community businesses. They exploit the price advantage that they gain from avoiding the corporation tax that they should be paying, yet the government is about to bestow on them the largest bonanza for big business in modern times.”

Computer Weekly contacted Hodge, who chairs the Anti-Corruption and Responsible Tax All-Party Parliamentary Group (APPG), for her reaction to Monday’s votes, and she echoed the dismay displayed during previous debates on this topic.

“Huge companies that use artificial corporate structures to shift their profits abroad and avoid paying tax in the UK should not be able to access generous tax reliefs,” she said. “That is why I have campaigned for the biggest multinationals – especially big tech firms like Amazon or Google – to be barred from accessing the government’s overly generous super-deduction capital allowance.

“The government should spend more time backing British SMEs and our much-loved high-street brands instead of dishing out cash to huge multinationals.”

During a Finance Bill debate in the House of Commons on 19 April 2021, Hodge expanded on her misgivings about the policy, particularly with regard to how little time companies without “oven-ready” capital investment plans will have to tap into it.

“The tax relief will last for only two years, so it is unlikely to fund the aviation industry or genuinely new capital investment, which takes time to plan and to implement,” she said.

“It will mainly be used to cut taxes for companies that were investing anyway, and those that will benefit most are those that have prospered most during the pandemic. They are the companies with oven-ready capital investment plans, benefiting from the increased demand they have enjoyed over the last torrid year.”

As previously reported by Computer Weekly, Amazon has seen its profit and revenue soar over the course of the pandemic, as stay-at-home instructions across the globe resulted in a surge in demand for online orders and deliveries.

This has resulted in the firm embarking on a series of hiring sprees in the various countries where it operates, including the UK, as well as making investments in building out the underlying infrastructure needed in its delivery and logistics network to accommodate this demand.

During Amazon’s most recent set of financial results, company CFO Brian Olsavsky confirmed that these investments would continue for the foreseeable future.

Computer Weekly contacted Amazon UK Services for comment on this story, and received the following statement from a spokesman in response: “We are proud to be investing heavily and creating good jobs right across the UK. Since 2010, we’ve invested more than £23bn in the UK, creating an estimated £45bn in value-added GDP.

“The UK has now become one of Amazon’s largest global hubs for talent and earlier this month we announced plans to create 10,000 new jobs in the country by the end of 2021, taking our total workforce to over 55,000. This continued investment helped contribute to a total tax contribution of £1.1bn during 2019 – £293m in direct taxes and £854m in indirect taxes.”
