How Bureau Veritas migrated 85% of its applications to the AWS cloud


With a strategic five-year plan, Bureau Veritas (BV) set out on its digital transformation project in 2015. Core to that transformation was migration of the vast bulk of its applications and infrastructure to the AWS cloud. Along the way, it faced challenges in migration, day-to-day management, costs and security. We talk to BV’s IT director, Jean-Marc Devos Plancq, about the transition.

Bureau Veritas – or BV to those who know it well – is one of the oldest companies still active anywhere. Formed in 1828 in Belgium, but now headquartered in France, the testing, inspection and certification firm employs 75,000 people in 1,500 offices in 150 countries.

Its original raison d’être was maritime risk assessment, but since then, Bureau Veritas has diversified into sectors that include automotive, railways, public sector infrastructure, transport, supply chain, energy, agro-food and health. With a presence on five continents, the group turned over €4.6bn in 2020.

“Acceleration of digitisation was a key pillar of the plan,” says Devos Plancq. “And that pillar comprised two aspects: delivery of digital services to our clients, built on the improvement and digitisation of our internal processes.”

So, the IT director began to look at what was on offer from public cloud providers. “Historically, we had used private datacentres to host our applications,” he says. “Capex had been a major budget component, but we wanted to gain some financial agility and not be tied to amortisation cycles over three, four, five years for different solutions.”

Devos Plancq also points out the time-consuming nature of such procurement cycles. “You have calls for tenders to replace a SAN that will cost hundreds of thousands or even millions of euros,” he says. “Then it takes months to select the vendor and the product, and to deliver and install it. It can occupy a large part of the IT department for 18 months just to add storage capacity. We wanted to avoid this inertia.”

Besides the investment and time needed for new deployments on-site, the IT chief wanted to reduce the burden of managing all these elements, as well as networks, patching, and so on. And so the agility promised by IaaS, PaaS, SaaS (infrastructure, platform and software as a service) and the cloud appeared very seductive.

“We wanted to orient ourselves towards what would bring value for customers, whether internal or external, that use the applications we had traditionally hosted,” says Devos Plancq.

At the time, six years ago, Bureau Veritas took the view that AWS was “the most mature provider, with a platform that had the largest number of directly usable services”, he adds.

And so, by February 2021, BV hosted 85% of its applications on AWS – but that transition didn’t happen overnight.

Three phases to cloud transition

“We started with a discovery phase, so we could understand how the cloud worked and how we could integrate it into our application environment,” says the IT director. “Also, we had to prepare support for our teams because they were about to completely change the tools they worked with.”

This exploratory period – lasting 18 to 24 months – saw applications move to the cloud as the opportunity arose. “We moved applications to AWS that were simple to deploy, notably those already delivered via DevSecOps pipelines and built on technologies such as Java,” says Devos Plancq.

With time, confidence and knowledge acquired, BV adopted its “cloud-first” approach. “According to this principle, any new applications had to be developed in the cloud, unless it was technically impossible,” he says. That period lasted two more years before the third stage was reached.

“When we believed we had good knowledge of the AWS platform, we decided to migrate all servers for our corporate solutions to the cloud, so we could shut down our on-site infrastructure,” says Devos Plancq.

This phase of migration to the AWS cloud meant moving Oracle and SQL Server databases to Amazon’s RDS database management system (DBMS). But the Bureau Veritas IT teams didn’t settle for a simple “lift and shift”. “We integrated a version upgrade to our databases into their move to the cloud,” says Devos Plancq.

“It was easier to migrate our databases to the cloud than to carry out an update on-site, because that would have required infrastructure changes too. We limited ourselves to creating backup and recovery partitions on the instances that we mounted on AWS, and that was it.”

Regression testing was carried out “to make sure everything was working, that there were no connectivity problems”, he adds.
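That migrate-plus-upgrade step can be scripted against the RDS API. Below is a minimal, purely illustrative sketch in Python using boto3 – not BV’s actual tooling – of provisioning a SQL Server instance at a newer engine version as a migration target; every identifier, version and size here is an assumption.

```python
# Illustrative only: provision an RDS SQL Server instance at a newer
# engine version to act as a migration target. Names, versions and
# sizes are invented for the example, not taken from Bureau Veritas.
import boto3

rds = boto3.client("rds", region_name="eu-west-1")

rds.create_db_instance(
    DBInstanceIdentifier="corp-app-db",       # hypothetical instance name
    Engine="sqlserver-se",                    # SQL Server Standard Edition
    EngineVersion="15.00.4073.23.v1",         # assumed newer target version
    LicenseModel="license-included",          # required for SQL Server on RDS
    DBInstanceClass="db.m5.xlarge",
    AllocatedStorage=500,                     # GiB
    MasterUsername="admin",
    MasterUserPassword="CHANGE_ME",           # use a secrets store in practice
    MultiAZ=True,                             # managed failover replaces on-site HA
)
```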

Successful cloud migration needs a few tricks

Migrating a DBMS to Amazon RDS can sometimes bring surprises, but the BV IT teams didn’t have any problems. “The bulk of the functionality in SQL databases is taken care of by RDS,” says Devos Plancq. “But there is some functionality that can’t be taken on by AWS. If you use it, you have to find another solution.”

This was one of those times when outside help was needed, so BV subscribed to the AWS Migration Acceleration Programme (MAP).

The RDS-managed databases serve applications deployed on Elastic Beanstalk, which is one of the longest-standing services provided by AWS and is used heavily by Bureau Veritas. “This PaaS allows deployment of applications and brings the benefits of automated platform scaling,” says the IT chief. “You manage the environment rather than the servers because the platform manages itself according to the number of users at any one time.”

The IT team administers about 50 applications in this way, out of a total of 115.

“For custom developments, the PaaS allows us to guarantee a level of performance according to the time of day, the number of users connected, but also to optimise our costs when activity is low or non-existent,” says Devos Plancq.
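To make that concrete, here is a small hedged sketch of how an environment’s scaling bounds can be tuned through the Elastic Beanstalk API with boto3; the application and environment names, and the sizes, are invented for the example.

```python
# Hypothetical sketch: let an Elastic Beanstalk environment scale itself
# between one and eight instances with demand. Names are illustrative.
import boto3

eb = boto3.client("elasticbeanstalk", region_name="eu-west-1")

eb.update_environment(
    ApplicationName="corp-portal",            # assumed application name
    EnvironmentName="corp-portal-prod",       # assumed environment name
    OptionSettings=[
        # The platform's Auto Scaling group grows and shrinks within
        # these bounds according to load, so quiet periods cost less.
        {"Namespace": "aws:autoscaling:asg", "OptionName": "MinSize", "Value": "1"},
        {"Namespace": "aws:autoscaling:asg", "OptionName": "MaxSize", "Value": "8"},
    ],
)
```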

Most of the applications developed by Bureau Veritas are written in Java. Elastic Beanstalk was built with Java in mind and supports frameworks and languages that include .NET, Node.js, PHP, Python, Go and Ruby.

On the other hand, Elastic Beanstalk also requires you to take several peculiarities into account, says Devos Plancq. “It is important that your applications aren’t dependent on user sessions,” he adds. “Because the platform decides which server is active or not, users can lose their progress within a task. So, you have to manage sessions in a shared cache.”

For that, Bureau Veritas uses Amazon’s ElastiCache, a service based on the in-memory data stores Redis and Memcached.

“This requires a little tweaking in the application to externalise user sessions in the cache, but equally it’s important the sessions don’t have a big footprint when serialisation takes place,” says Devos Plancq. “Ideally, you should use stateless applications.”
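The pattern is simple to sketch. The example below is a minimal illustration in Python rather than BV’s Java code: it stores each session as a small serialised object behind a shared Redis endpoint such as the one ElastiCache exposes. The endpoint name and timeout are assumptions.

```python
# Minimal sketch of externalising user sessions to a shared Redis cache.
# The endpoint and TTL are illustrative; any app server can read or
# write a session, so it no longer matters which server is active.
import json
import uuid

import redis

cache = redis.Redis(host="sessions.example.cache.amazonaws.com", port=6379)

SESSION_TTL_SECONDS = 1800  # expire idle sessions after 30 minutes


def create_session(user_id: str) -> str:
    """Store a new session in the shared cache and return its ID."""
    session_id = str(uuid.uuid4())
    payload = {"user_id": user_id, "progress": {}}  # keep the footprint small
    cache.setex(f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps(payload))
    return session_id


def load_session(session_id: str) -> dict | None:
    """Fetch a session regardless of which app server handles the request."""
    raw = cache.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```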

Cloud-first bears fruit

Devos Plancq is full of praise for how quickly it is possible to develop and deploy solutions via AWS services. He points to the example of Bureau Veritas’s Restart your Business, which provides services to help customers reopen workplaces and spaces to the public after Covid restrictions. The application was developed in 14 days and deployed in 85 countries in “three or four days”, he says.

He points to the number of new services regularly announced by AWS, many of them pushing towards a serverless approach. “We’re taking things to the next level with PaaS, with services like Lambda that allow use of milliseconds-worth of compute to execute applications,” he says.

Automation of processing is also on the right track. BV’s IT teams are committed to an infrastructure-as-code approach to deploy and upgrade the technical infrastructure, to upgrade operating systems, applications and other services.

On this subject, Devos Plancq says his teams make use of services that help automate operations and use alerts to warn, for example, of the need to deploy another bucket when an S3 bucket’s consumption has passed a set threshold.
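As an illustration of the kind of check involved, the hedged sketch below reads CloudWatch’s daily S3 storage metric and creates an overflow bucket past a threshold; the bucket names and the threshold are assumptions, not BV’s configuration.

```python
# Hypothetical sketch: watch a bucket's size via CloudWatch's daily
# BucketSizeBytes metric and create an overflow bucket past a threshold.
# Bucket names and the threshold are invented for the example.
import datetime

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")
s3 = boto3.client("s3", region_name="eu-west-1")

THRESHOLD_BYTES = 5 * 1024**4  # 5 TiB, illustrative

now = datetime.datetime.utcnow()
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/S3",
    MetricName="BucketSizeBytes",
    Dimensions=[
        {"Name": "BucketName", "Value": "bv-archive"},
        {"Name": "StorageType", "Value": "StandardStorage"},
    ],
    StartTime=now - datetime.timedelta(days=2),  # the metric is published daily
    EndTime=now,
    Period=86400,
    Statistics=["Average"],
)

datapoints = stats["Datapoints"]
if datapoints and max(dp["Average"] for dp in datapoints) > THRESHOLD_BYTES:
    s3.create_bucket(
        Bucket="bv-archive-overflow",
        CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
    )
```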

“We’re working to connect our systems via APIs [application programming interfaces],” he says. “We use API Gateway to allow local applications to talk to applications across the group, but also to allow access to customers and partners.”

API Gateway is used, for example, in Code’n’go, the highway code learners’ application developed by BV and delivered via driving schools.

Obviously, to adapt all an organisation’s services to such a new way of working demands financial vigilance. Bureau Veritas has gradually shifted to a FinOps approach that makes use of EC2 Savings Plans as well as automation of startup and shutdown of test-and-dev environments used by developers.
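The second half of that approach is easy to picture. This is a purely illustrative sketch, not BV’s scripts, of an evening job that stops every running instance tagged as a test-and-dev environment; the tag key and values are assumptions.

```python
# Illustrative FinOps job: stop running EC2 instances tagged as
# test-and-dev environments outside working hours. The tag key and
# values are assumptions; a matching morning job calls start_instances.
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:environment", "Values": ["dev", "test"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    instance["InstanceId"]
    for reservation in reservations
    for instance in reservation["Instances"]
]

if instance_ids:
    # Typically scheduled each evening, e.g. via an EventBridge rule.
    ec2.stop_instances(InstanceIds=instance_ids)
```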

After six years, adapting to these services and their limits is part of the daily life of BV’s IT teams, but they also face other difficulties.

“We are often constrained by the technical prerequisites of software packages,” says Devos Plancq. Bureau Veritas uses Documentum for EDM, Sybele for disaster recovery, Tableau for BI and reporting, and SAP for financials.




Where to run containers? Bare metal vs virtual machines


Containers are a serious and emerging contender as a method of application delivery. Although they are by no means in use universally yet, most enterprises have deployed containers somewhere or are investigating their capabilities.

Their advantages centre on the ability to abstract everything needed to run applications away from the hardware, and with the potential for many – very many, in fact – container instances to be created and run on demand, they are supremely scalable.

Of course, quite often container clusters and orchestration are run in virtual server environments, but they don’t have to be. They can also run directly on bare metal servers. In this article, we’ll look at bare metal vs virtual machines (VMs) and where to run containers.

Running containers: bare metal or VM?

Containers are a form of virtualisation, but one in which the application and all the microservices needed for its execution run on top of the host server operating system, with only the container runtime engine between the two. Virtualised server environments, meanwhile, see the hypervisor run on the host hardware or operating system, with guest operating systems on top and applications running in those environments.

Most questions around whether to run containers on VMs or on bare metal derive from this basic fact.

Key decisions: Performance vs convenience, perhaps cost too

The decision over where to deploy container infrastructure pivots on performance requirements vs convenience. It’s way more convenient to run container orchestrators and their nodes on VMs, but you’ll lose out somewhat on performance. Having said that, if you want the performance benefits of bare metal, you probably need to be running your own on-prem environment and be prepared to do the work needed to make up for the convenience a hypervisor environment would otherwise bring.

Also, cost can come into things. Because bare metal servers can run a lightweight Linux OS (such as CoreOS or its descendants), they avoid a lot of the cost of hypervisor licensing. Of course, that also means they miss out on advanced functionality available from virtualisation environments.

Benefits and penalties of virtualisation

Putting a virtualisation layer on top of the host OS means adding a layer of software to the environment, which brings benefits as well as penalties.

In a virtualisation environment, the hypervisor brings a lot of functionality and allows for maximised hardware utilisation.

Key benefits here are that workloads can be migrated between hosts easily, even when they don’t share the same underlying host OS. That is especially useful for containers, which are desirable for their portability between locations but are dependent on the OS they were built for. Use of a particular virtualisation landscape will provide a consistent software environment in which to run containerised applications even if the host OS differs.

But at the same time, all the things about virtualisation that bring benefits also come with penalties. That is rooted in the fact that your physical resources simply have to do more computing because of the added layers of abstraction.

That is most clearly visible in the performance difference between containers that run on bare metal and in virtualised environments. Benchmarking tests carried out by Stratoscale found that containers on bare metal performed 25% to 30% better than in VMs, because of the performance overhead of virtualisation.

Meanwhile, VM environments tend to have resources – such as storage allocated at startup – that remain provisioned to them. Diamanti, which provides a Kubernetes platform aimed at use on bare metal and in the cloud, claims resource utilisation can be as low as 15% in virtualised environments and that it can cut hardware use by 5x.

Despite the inherent performance advantages of bare metal, VMware, with its Tanzu Kubernetes platform, has made efforts to mitigate the overheads virtualisation adds.

Bare metal downsides

Having said all that, there are downsides to containers on bare metal.

Key among these is that container environments are OS-dependent, so one that is built for Linux will only run on Linux, for example. That will potentially put limits on migration and may work against you in the cloud, where bare metal availability is limited and, where you can find it, costs more. Given that one of the key advantages of containers is the ability to migrate workloads between on-prem sites and the cloud, that’s not good news.

Bare metal container deployments will also lack features that come with virtualisation software layers, such as rollback and snapshots.

Deploying containers to bare metal can also make it more difficult to mitigate risk via redundancy. VMs allow you to split nodes between them, whereas when container nodes are installed on bare metal, there are likely to be fewer of them, and they will be less portable and less shareable.




Oil and gas firm halves backup licence cost with Hycu move


Oil and gas exploration company Summit E&P has cut backup licence costs in half after switching from Veeam to Hycu and migrating from VMware to Nutanix and its native hypervisor, AHV.

Summit, a UK-based subsidiary of the Sumitomo Corporation, has only 17 employees but holds about 250TB of data, which is gathered by geophysical survey boats and exploratory wells.

Projects come as large files (300GB to 400GB) within larger datasets of flat files (up to 1TB) that are analysed on high-end workstations.

The infrastructure had comprised NetApp storage, VMware virtualisation, and backup software from Veeam plus Symantec (for physical servers).

An initial move to Nutanix hyper-converged infrastructure came in 2017 with the deployment of a three-node cluster. “Then, in 2020, we decided Nutanix was the way to go,” said Summit IT and data manager Richard Inwards.

“NetApp had become expensive and difficult to maintain and we got rid of ESX for [the Nutanix] AHV [hypervisor]. We got rid of Veeam because we got rid of ESX, but also because Hycu for Nutanix could back up physical servers.”

He added: “Veeam didn’t back up physical servers very well at the time.”

Summit holds about 200TB of data on-site with some held off-site and streamed to the cloud. It runs 13 virtual machines (VMs) plus four physical servers.

So, Summit deployed Nutanix with the AHV hypervisor, plus Hycu, which provides incremental backups.

Inwards said licensing costs for Hycu are about half those for Veeam, but the key benefits are in ease of use.

“It’s a much more simple interface,” he said. “And we can now use one product for virtual and physical instead of two. Also, when we moved from ESX to Nutanix, Hycu handled the migration. We backed up from ESX and restored to Nutanix.

“The big benefit of Hycu is that it can do what Veeam couldn’t do, which is to integrate well with the Nutanix environment.”

Summit backs up about 50GB to 60GB per day via Hycu.

Hycu was spun off from the Comtrade Group into its own company in 2018. It offers backup software tailored to Nutanix and VMware virtualisation environments, as well as Google Cloud, Azure and Office 365 cloud workloads. It also offers a product aimed at Kubernetes backup.




Valence brings storage virtualisation for the cloud era


Startup 22dot6 has launched its Valence storage virtualisation platform, with bold claims that it can transcend all existing third-party storage to provide a single view of all an organisation’s data on any media, from high-performance flash storage to Glacier-like cold storage in the cloud and even off-line tape.

The launch centres on 22dot6’s software-defined Transcendent Abstracted Storage System (Tass) architecture and its Valence software. The claim is that Valence allows enterprises to access, move and manage data assets transparently no matter the storage resource on which they reside, like a storage virtualisation product.

Valence offers multi-protocol (SMB, NFS, S3) access as well as advanced storage services, including snapshots, replication, migration and cloning. It has CSI drivers to provide persistent storage for containers.

Valence is a Linux-based software product that can be deployed on bare metal or in a virtual machine environment. Key advantages cited are the ability to access stored data in any location, greater availability through unifying an organisation’s infrastructure globally, and zero-impact migration and decommissioning.

The product looks remarkably like the storage virtualisation products that were commonplace around a decade ago. Examples that are still in existence are IBM’s Spectrum Virtualise (formerly SAN Volume Controller) and DataCore SANSymphony. 22dot6 would claim its products go way beyond these in capability, with the addition of the ability to abstract cloud and offline data sources too.

Valence comes as two different types of node that handle metadata from the stored data (VSR nodes) and data services (DSX nodes) while the data remains on the existing storage. VSR and DSX can be built into clusters of multiple nodes in an architecture that CEO Diamond Lauffin – formerly of Storbyte and Nexsan – said provides huge performance advantages over existing storage array products.

“Look at Pure and Nimble. The idea of active-active dual controllers is a 1998 architecture,” said Lauffin. “It’s effectively active-passive, because it only allows you to use one controller at a time. And the truth is, when did a RAID controller last fail? I speak to about 75 companies a week and I’ve not heard of any that had this happen to them in the last five years. Why be limited by two-controller access where the user can’t use 50% of the throughput?”

Valence provides a single global namespace with “file-granular capability”. Access isn’t limited to volumes or LUNs and is based on user-defined policies that aggregate backend storage into tiers based on performance and capacity requirements.

Nodes can be user-selected commodity hardware or pre-configured by 22dot6. Pricing depends on the class of storage and capacity. “There would be a delta of 5x to 10x between the most and least performant storage,” said Lauffin.

In the Tass schema, all sites can be considered active sites, sharing data in real time with any other site, and all can act as a primary site.

According to Lauffin, the scale-up and scale-out nature of the architecture allows for throughput of up to 1,200GBps, with up to 60 nodes per location.

Valence supports off-line data management for archives like Amazon Glacier and tape.

Tass separates metadata management, data analytics/profiling and data services from the processes of providing IOPS and managing throughput. Valence assigns these different tasks to dedicated CPU and RAM resources across different nodes. Valence monitors performance in real time, with predictive analysis and reporting to guarantee user-defined objectives, including read/write bandwidth, IOPS and latency.

Data protection can be configured at an application, user and sub-file level. Meanwhile, Valence can be configured for multi-tenancy with potentially thousands of independent customers or internal departments isolated from each other through a unified, multi-location management console.




MP-backed push to stop tech giants claiming super-deduction tax relief thwarted


A push by Labour MPs to block multinational tech giants from claiming tax relief through the government’s “super-deduction” policy has failed, despite concerns that the system could be used by tech firms such as Amazon to further minimise the amount of corporation tax they pay in the UK.

MPs were called to vote on a series of proposed amendments to the forthcoming Finance Bill 2019-2021. Among them was a proposal that sought to preclude tech firms in-scope of the government’s digital services tax policy from making capital allowance claims through the super-deduction system.

The amendment, tabled by Labour leader Keir Starmer with the support of five other Labour MPs, failed to receive the number of votes required to action the proposal during the vote on Monday 24 May 2021.

This means tech firms that are liable to pay the digital services tax will still be able to use the super-deduction to claim tax relief on plants and machinery purchases, despite mounting concerns that this could offer the likes of Amazon a means to markedly minimise the amount of tax they pay in the UK.

“As the Bill stands, the [super-deduction] will finish the job Amazon started, wiping out the last bit of tax it had to pay on the few parts of its business, the profits of which it has been unable to shift overseas,” said Labour MP James Murray during the House of Commons debate ahead of Monday’s vote.

“A vote in favour of our amendment would stop Amazon and a small number of similar firms benefiting from a giveaway of public money – public money that could be better spent for so many purposes, including to support British businesses that have been struggling throughout the past year.”

Why stop tech firms using the super-deduction?

Announced in the March 2021 Budget, the super-deduction has been described by chancellor Rishi Sunak as the “biggest two-year business tax cut in modern British history” which the government claims will unlock £20bn a year in investment during the policy’s lifetime.

It is one of a number of different policies set out in the Budget to stimulate the UK’s post-pandemic economic recovery, with the super-deduction specifically focused on providing companies with financial incentives to invest in the “productivity-enhancing” plant and machinery assets they need to help their businesses grow.

The policy, which runs from April 2021 to March 2023, will achieve this by allowing firms to deduct 130% of the cost of any qualifying plant and machinery investments from their taxable profits, and make use of a 50% first-year allowance for any qualifying special rate assets.

According to the government’s own figures, this means qualifying companies can cut their tax bills by up to 25p for every £1 they invest, leaving them with more money to reinvest in their own business growth plans.
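A quick worked example shows where that 25p figure comes from, using the 19% corporation tax rate in force at the time:

$$\pounds 1.00 \times 130\% \times 19\% = \pounds 0.247 \approx 25\text{p of tax saved per }\pounds 1\text{ invested}$$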

However, concerns have been raised since the policy was announced about the potential for it to be used by multinational tech firms that process their UK sales through overseas subsidiaries to minimise the amount of tax they pay in this country.

Speaking to Computer Weekly, Murray said this was precisely the type of behaviour the defeated amendment was intended to curb. “It is unacceptable that, for many years, multinational tech giants have been shifting their profits overseas while other businesses pay their fair share here in Britain,” he said.

“It cannot be right for the government to give those same large multinationals a further tax write-off, and so we attempted to prevent public money from being spent on a ‘super-deduction’ for the biggest tech firms.

“More widely, the government should be taking clear steps to curb tax avoidance by large multinationals and to level the playing field to stop British businesses being undercut.”

Online retail giant Amazon has frequently been cited in these discussions as an example of a firm whose operations fall into the category outlined by Murray. For example, its UK sales are processed through a subsidiary in the renowned tax haven of Luxembourg, while its plant and machinery investments are made through Amazon UK Services, which provides warehousing and delivery services for its UK operations.

According to George Turner, director of investigative think-tank TaxWatch, the super-deduction could prove hugely beneficial for Amazon’s UK tax affairs if the company took advantage of it.

“Amazon do have a lot of infrastructure in their delivery network and they’re growing a lot, and during the pandemic they hugely benefited from restrictions that were put in place to deal with it,” Turner told Computer Weekly.

“They pay very little tax in the UK as it is, although they do pay a little bit of tax, but their tax bill will be entirely wiped out by the super-deduction.”

According to figures pulled up by TaxWatch’s research team, Amazon UK Services made a pre-tax profit of £102m in 2019 and had a corporation tax liability of £6.3m, while the company’s own accounts show it spent £66.8m on plant and machinery, £80.4m on office equipment and £15.3m on computer equipment during the same year.

“If expensed at 130% [as per the terms of the super-deduction], this would entirely wipe out the taxable profits of the company before any deductions for staff pay awards,” said TaxWatch in its Amazon tax cut report, published post-Budget.
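The arithmetic behind that claim is straightforward, on the assumption (ours, for illustration) that all three categories of spending qualify for the 130% rate:

$$(\pounds 66.8\text{m} + \pounds 80.4\text{m} + \pounds 15.3\text{m}) \times 130\% = \pounds 162.5\text{m} \times 1.3 \approx \pounds 211\text{m} > \pounds 102\text{m of pre-tax profit}$$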

Upset in the chamber

The TaxWatch report has since been cited regularly by Labour MPs during Finance Bill-related House of Commons debates over the last couple of months, as they have echoed Turner’s sentiments that it is firms like Amazon that stand to benefit most from the super-deduction policy.

Margaret Hodge has repeatedly spoken in the House of Commons about her misgivings about the super-deduction, while voicing support for amendments that also sought to ban multinationals with a history of corporate tax avoidance from accessing the super-deduction. This amendment was not put to the vote.

“These companies refuse to contribute to the common pot, yet they are about to be gifted – by us, from that very same pot – a hugely generous tax relief [through the super-deduction],” said Hodge during the debate ahead of the vote on 24 May.

“These companies need the public services that taxes buy, from improved connectivity to transport infrastructure, from the education of their workforce to investment in the NHS to keep their workers healthy. However, they persist in deliberately not paying their fair share of corporation tax.

“These companies can undercut and destroy our high streets and community businesses. They exploit the price advantage that they gain from avoiding the corporation tax that they should be paying, yet the government is about to bestow on them the largest bonanza for big business in modern times.”

Computer Weekly contacted Hodge, who chairs the Anti-Corruption and Responsible Tax All-Party Parliamentary Group (APPG), for her reaction to Monday’s votes, and she echoed the dismay displayed during previous debates on this topic.

“Huge companies that use artificial corporate structures to shift their profits abroad and avoid paying tax in the UK should not be able to access generous tax reliefs,” she said. “That is why I have campaigned for the biggest multinationals – especially big tech firms like Amazon or Google – to be barred from accessing the government’s overly generous super-deduction capital allowance.

“The government should spend more time backing British SMEs and our much-loved high-street brands instead of dishing out cash to huge multinationals.”

During a Finance Bill debate in the House of Commons on 19 April 2021, Hodge expanded on her misgivings about the policy, particularly with regard to how little time companies without “oven-ready” capital investment plans will have to tap into it.

“The tax relief will last for only two years, so it is unlikely to fund the aviation industry or genuinely new capital investment, which takes time to plan and to implement,” she said.

“It will mainly be used to cut taxes for companies that were investing anyway, and those that will benefit most are those that have prospered most during the pandemic. They are the companies with oven-ready capital investment plans, benefiting from the increased demand they have enjoyed over the last torrid year.”

As previously reported by Computer Weekly, Amazon has seen its profit and revenue soar over the course of the pandemic, as stay-at-home instructions across the globe resulted in a surge in demand for online orders and deliveries.

This has resulted in the firm embarking on a series of hiring sprees in the various countries where it operates, including the UK, as well as making investments in building out the underlying infrastructure needed in its delivery and logistics network to accommodate this demand.

During Amazon’s most recent set of financial results, company CFO Brian Olsavsky confirmed that these investments would continue for the foreseeable future.

Computer Weekly contacted Amazon UK Services for comment on this story, and received the following statement from a spokesman in response: “We are proud to be investing heavily and creating good jobs right across the UK. Since 2010, we’ve invested more than £23bn in the UK, creating an estimated £45bn in value-added GDP.

“The UK has now become one of Amazon’s largest global hubs for talent and earlier this month we announced plans to create 10,000 new jobs in the country by the end of 2021, taking our total workforce to over 55,000. This continued investment helped contribute to a total tax contribution of £1.1bn during 2019 – £293m in direct taxes and £854m in indirect taxes.”




Glitch Sends NASA’s Mars Helicopter On A Wild Ride


CAPE CANAVERAL, Fla. (AP) — A navigation timing error sent NASA’s little Mars helicopter on a wild, lurching ride, its first major problem since it took to the Martian skies last month.

The experimental helicopter, named Ingenuity, managed to land safely, officials at the Jet Propulsion Laboratory reported Thursday.

The trouble cropped up about a minute into the helicopter’s sixth test flight last Saturday at an altitude of 33 feet (10 meters). One of the numerous pictures taken by an on-board camera did not register in the navigation system, throwing the entire timing sequence off and confusing the craft about its location.

Ingenuity began tilting back and forth as much as 20 degrees and suffered power consumption spikes, according to Håvard Grip, the helicopter’s chief pilot. A built-in system to provide extra margin for stability “came to the rescue,” he wrote in an online status update. The helicopter landed within 16 feet (5 meters) of its intended touchdown site.

Ingenuity became the first aircraft to make a powered flight on another planet in April, two months after landing on Mars with NASA’s rover Perseverance.

The 4-pound (1.8-kilogram) helicopter aced its first five flights, each one more challenging than before. NASA was so impressed by the $85 million tech demo that it extended its mission by at least a month.

Saturday’s troubled flight was the first for this bonus period. Engineers have spent the past several days addressing the problem.






Would you like to have a FREE project quote?


Coming up with an idea is the simplest thing. It’s easy to imagine a picture or see a problem to be solved. But developing that idea and building an app is nowhere near as easy.

Mobile applications have proven to be one of the best methods of building a thriving market and reaching out to more customers. Millions of businesses now leverage apps to increase their sales and reach more of their target customers. This is because more than half of the world’s population use smartphones, and therefore prefer to carry out their online activities through mobile apps.


However, with the influx of millions of applications to the marketplace, most of the apps fail miserably. This is why you need to get the processes right before you begin to develop your idea.

Thankfully, this is why we are here. With our experience in mobile app development, we have distilled the strategies and processes involved in developing a successful mobile application. We will be sharing them with you, so read this till the end.

First Things First

Validate your Idea:

This is where most of the work goes. Most people just dive into developing their app without appropriate preparation, and that is where they go wrong. This process involves the following:

1. Defining your target audience:

The first and most important thing you should do is discover who will be using your app. Who needs the solution that the app brings to the market? Where will they be? What are their buying habits? Are they mainly Android or iOS users? You don’t just wake up and start building an app believing that the app is for everyone. You will be shocked at the outcome if you do it this way. You need to narrow your audience to a specific target audience.


2. Find out your Market Size:

Now that you have defined your target audience, the next thing is to determine your market size. You should find out if there are enough people who need the app. You don’t just launch a solution into the market without determining if people need it. Don’t always assume people will like your idea.

A better way to do this is to conduct market research using keywords that your target audience is searching for. While you are at it, pay close attention to the monthly search volumes of these keywords. This will help you find out if your app solution will be greatly accepted and increase the potential popularity of your app. Tools like Ahrefs and SEMRush will significantly help you to achieve this.

3. Research your competition:

Yes, there is enough space in the sky for every bird to fly. But then you must not fly blindly when you can do it smartly. Researching your competition is something you should actively do as it gives you more insight into the market size. You can know those that are already doing what you want to do, seek ways to improve it, and make yours better. If your idea is new with no competition, you will never know until you have conducted your research.

Designing and developing the app: 

In the design and development of the app, you must take note of the following.

1. Defining the app:

You should be clear on what kind of app you want. What do you hope to achieve with the app? Do you intend to sell the app later? Is your app going to be an aspect of your already existing company? Are you a startup company wishing to increase your brand visibility through an app? You should also be clear on the functions of your app. This way, you won’t keep remembering new things to add during the development process, which would cost more time and money. Ensure that you sort all of this out before you begin developing the app.

2. Deciding which platform to host your app:

Making this decision is not as easy as it may seem. You must understand the differences between iOS and Android. It’s generally faster and less expensive to launch your app on the Google Play Store, while getting into the Apple App Store involves a stricter review process.

Although the percentage of Android users globally is a lot higher than Apple’s, your monetization strategy can affect the platform you choose. If you intend to charge people for downloads, then using the Apple App Store would be better. With Android, in-app purchases and adverts are the primary means of making money.

You can also decide to build on both platforms or build a hybrid app. Hybrid apps can work on both Android and iOS devices.

3. Plan for a team:

Building an app is not child’s play. It is not something you can treat as a side job if you want to be successful. You will need the following people:

  • a project manager
  • the developer himself
  • the marketer
  • the person who will draw the business plan
  • the person in charge of branding

If you do not have all of these skills, then endeavor to employ people who can help you with this, most especially the project manager and the developer. It is advisable to have the developer be a member of your company to help you manage the app and fix any bugs after launching. Companies like ISHIR can help you transform your app idea into a viable product.

Launching Your App

Before completing the development process, ensure that you have already started setting up marketing channels and putting the word out about your app. This helps ensure that as soon as the app is released, you have people on the ground to download and purchase it. As we initially mentioned, you can always outsource these skills for a smooth process, so the marketer can always be your team member.


Note: A critical step you must not miss is testing the app at each stage of development. This helps to keep things in check and ensures a better application.

Conclusion

Building an app is not an easy process, as we initially mentioned. This is why you must ensure that you have the right team, including a professional developer or agency, to handle the project.

While outsourcing a developer, ensure that you employ people with a track record of delivering quality service. Just as we mentioned earlier, at ISHIR.com you will find a team of professionals ready to get your app to the marketplace.




Podcast: Federal Budget Insights – Creating a secure digital future


Australia is going digital. That was one of the messages from Australia’s 2021 Federal Budget which signalled both investment and rising ambition when it comes to the country’s digitisation efforts. Alongside AU$1.2 billion being earmarked for the digital economy, the Government identified its goal for Australia to become a top 10 digital economy by 2030.

Against the backdrop of a global pandemic, consumer and business behaviour has gone digital, catapulting the country through a decade’s worth of digitisation in just nine months. So what key industries will benefit from this increased funding and motivation? And where do the greatest opportunities lie? 

As part of PwC Australia’s Federal Budget Insights podcast series, Cyber Security and Digital Trust Partner Nicola Nicol and Telecom, Media and Technology (TMT) Partner Mohammed Chowdhury sat down to discuss the Digital Economy Strategy.


Episode transcript

Laura Jayes: Hello, I’m Laura. Welcome to the PwC Federal Budget podcast.

COVID-19 triggered the largest work from home transition the world has ever seen, coupled with the rise of online shopping, digital entertainment and telehealth, it’s little wonder that the 2021 federal budget has continued the Government’s investment into digital. 

I caught up with PwC Australia’s Cyber Security and Digital Trust Partner Nicola Nicol and Telecom, Media and Technology (TMT) Partner Mohammed Chowdhury to discuss why the Government’s Digital Economy Strategy is a step forward, what key industries will benefit from greater digitisation and how to enable the next generation of digital talent. 

So can Australia achieve its ambition to be a top 10 digital economy by 2030? Let’s find out. 

Well, we should start at the very beginning. The Government has been talking about the digital economy, as you’d expect, it’s put this goal forward of top 10 by 2030. Give us an idea of where we are compared to the rest of the world?

Nicola Nicol: If you look at Australia from a security point of view, we’ve invested heavily over the last 12 months, and in the 2020 budget there was significant investment in cyber. This year is really a continuation of that. It’s about building trust in the ecosystem. I’ll give you an example: the simplification, but also the securing, of citizen services. Things like building a better, consistent digital identity across Government services are actually forecast to unlock 11 billion dollars per annum just in servicing costs. So there’s really that linkage between implementing improved citizen services – making things simple and making them secure at the same time – and delivering economic value.

Laura Jayes: Corporate Australia is often ahead of Government when it comes to steps towards the digital economy. But with this target from the Government, how does it support what already is being done?

Mohammad Chowdhury: There is a lot being done by corporate Australia, especially large corporates and big end of town companies, which are generally much more advanced with digitisation than medium or small-size businesses. But the real opportunity, Laura, is in that SMB space, which accounts for over 55 percent of our GDP but is at the moment much less digitised than a lot of its peers in other OECD countries like Germany or South Korea, for example. So today in Australia, almost 75 percent of SMBs don’t have a high-speed broadband connection. If you think about that, it really shows what the opportunity is for the digital economy going forward – for Australia to really grow and to target being in the sort of top 10 digitised countries within the next decade.

Laura Jayes: Mohammed, what can we learn from other countries who are better at this than we are at the moment?

Mohammad Chowdhury: The countries that have successfully digitised, Laura, are ones that have really had a champion behind the digitisation. Digitisation happens in thousands of businesses around the country and affects millions of employees. So having a champion in those first few years is very key.

So if you look at examples of Singapore, they had a Minister of Digital Economy across the nation. If you look at Finland, they were absolutely determined in creating the right policy and regulatory environment for digitisation right from the centre of Government. And we could see the same sorts of things happening in countries such as the UK and the US.

So Australia probably needs to have a focusing mission around the digital economy, which probably needs to come partly from the Federal Government, but probably needs to come from states as well. The second thing I would then say is that mission needs to be followed through into the implementation of these programs.

Laura Jayes: Nicola COVID-19 has dictated so much of our lives over the last year. But what are the opportunities here? How can we dictate it?

Nicola Nicol: And so I think the real opportunity here is twofold. One is to really embed security up front in everything we’re doing to digitise the economy. And if we look at it, you know, 95 percent of Australian CEOs have said that cyber is a threat to business growth.

And I think we have an opportunity to get ahead of that risk and actually build solutions up front, early in the digitisation agenda. So I think that’s one significant opportunity, and the other is to build capability, skills and experience. And, you know, we’ve seen many individuals across different groups – women in particular – impacted by the pandemic. And as we look to grow and uplift skills in the cyber space, and even in the digital space, we’ve got an opportunity to help people get into really well-paid careers and to grow and improve the economy through that recruitment and that uplift in skills.

Laura Jayes: In the Budget, there was $1.2 billion going towards this Digital Economy Strategy. It’s a step forward. And I think we’ve just had a year where people who really wouldn’t have done so pre-pandemic have interacted with services such as MyGov, My Health Record and the digital identification system. How will this Budget spend help customers have a better experience, Mohammed?

Mohammad Chowdhury: This Budget couldn’t have come at a better time. So firstly, as you rightly said, Laura, the COVID-19 experience forced most of us into adopting digital technologies in our day to day lives and in our work. We basically went through a decade’s worth of digitisation over a nine month period. And as a result of that, the country is now much better poised to pivot into digital in a bigger way than it was before COVID-19 hit us. 

In terms of health care, it’s probably a very obvious industry to start with. The money that’s going to go into MyGov – about 200 million dollars – and My Health Record – about 300 million – will actually go a long way towards continuing the advances made during COVID-19. And that’ll impact many parts of our society in different ways. 

So if, for example, you take aged care. Aged care is potentially one of the real beneficiaries of digitisation because today, in many of our aged care facilities, elderly citizens are really confined to staying in the facility. Whereas with better digitisation and connected technologies that allow individuals to be monitored even when they’re moving around, or even for their medication to be adjusted, it means that a lot of our elderly citizens will actually be able to go out more often from the aged care facility, to spend time with their families, perhaps even visit home, and to do so with dignity, knowing that they’ve got a level of care behind them thanks to digital technologies, which are able to exchange and utilise that data in a really efficient way. So this really comes at a good time, not just for health care, but for other industries.

Laura Jayes: To Nicola, beyond health, where do you see the opportunities?

Nicola Nicol: What I was really pleased to see was the focus on uplifting the protection of sensitive data held by government, and also things like the pilot of cyber hubs, which is all about making sure that smaller government departments and agencies – which perhaps don’t have the skills and capabilities themselves to protect services in a really mature way – can centralise some of that. So Mohammad talks about all of those impacts across industries and citizens; we really are also seeing a focus here on strengthening Government services themselves and the Government’s data protection.

Laura Jayes: We didn’t see a huge investment in the Budget when it came to cybersecurity, apart from the expansion of the Cyber Security Innovation Fund. How important is that fund and is it going to make a meaningful impact?

Nicola Nicol: Yes, I think that continuation of spend is really the theme. So there was not a lot of new information in the Budget on cyber, but it’s more about the continuation from last year. And the expansion of the fund, I think, is really key. There are 40 percent of Australian businesses planning to increase their cyber headcount this year, and we’ve never witnessed such high demand for cyber resources. So I think having that fund there and increasing the spend on it this year is really important. What I’d like to see is some expanded objectives around it, so that the funding is really focused on the right things and somebody is able to measure its impact. I think for me that’s what’s key. But it was great to see that increased investment. I think that’s going to be important for building our capability going forward.

Laura Jayes: Mohammad, you identified some areas where business can expand in the digitisation journey, if you like. It has been an establishment of the new national network of artificial intelligence centres. Is that a leap in the right direction or just a step?

Mohammad Chowdhury: That’s a great question. I would say it is a very important step in the right direction, because it’s really important that Australia does develop onshore capabilities and skills in areas such as artificial intelligence. And that’s because those technology capabilities need to be very accessible to our businesses. They also need to be onshore so that we have a level of resilience and our own capability in these areas, especially as trade and political patterns around the world, you know, influence change, and especially as Australian businesses seek to participate much more in, say, the Asian and Pacific economy and beyond.

Laura Jayes: Do we have the pipeline of skilled workers coming through?

Mohammad Chowdhury: We do have a strong capability to develop that pipeline. So if you look at some of our universities, we have world class research capabilities in various technologies across different industries. We need to do more probably to develop that pipeline of graduates and skilled technicians who are coming through. But we also probably need to develop our capabilities not just in technical skills, but actually in human skills and collaboration skills in order to utilise the benefits of digital technology.

Nicola Nicol: And Mohammad, I could not agree more, even if you just think about that through the security lens. We’ve actually seen a change in the requirements for hiring and in what employers are looking for in cyber skills. What’s become really important are things like communication skills, problem solving, social skills. So there’s a real breadth to what employers are looking for, I think, both in the broader digital economy but also in the cyber security space.

Mohammad Chowdhury: I very much agree with you and building on that I think that there’s sort of two points that come to mind. One is that we must think of the digital economy as an inclusive economy. So you’ve already talked about diversity and inclusion. And I would sort of add to that by saying that there’s also a regional element to this in Australia. 

So Australia is very much dominated by the CBDs in our state capitals. However, a significant amount of our population – and perhaps increasingly so – will be living in the regions. And to some extent, there is a real focus now, I think especially from State Governments, helped very much by some of the federal funding, to really make sure that all digitisation is inclusive geographically to different communities in rural and regional areas. 

That’s very important from a connectivity perspective, so that we get the right fibre and mobile connectivity to let different communities participate in this digital economy. But I think it’s also very important from a skills development perspective that we are inclusive in bringing our communities onto this digital path, and that we avoid the threat, if you like, or the risk of a digital divide going forward.

Laura Jayes: You both talked about the need to attract talent to the industry. If I could bring it back to the budget, there’s a 100 million dollar line item there for development of digital skills in the workforce, and this is including cadetships as well. Nicola, are we good at this as a Government? Is this enough?

Nicola Nicol: So I think we’re getting better. We are investing more every year. And in looking at how we increase our digital skills, we’re looking at more partnerships. If I look at it, you know, not one of us can solve this problem. It’s not the Government’s problem to solve. It’s not the industry’s problem. It’s got to be Australia’s challenge. 

And actually, I think it represents a huge opportunity for not just for up and coming students who are coming into the workforce, but also for those who want to change and get into it, learn new skills and move industries and get into other well-paid jobs. 

So I think we’ve got to think broadly about what we’re doing. I think it’s about partnership between the Government and the private sector. And I think we’re improving every year, with the investment being recognised. What I’d like to see us measuring is outcomes. I think what’s really important is being able to track that and understand whether we are making enough inroads as we go forward.

Laura Jayes: So what has Australia got to gain from increasing focus on digital progression and what are the key ingredients?

Mohammad Chowdhury: What’s at stake here, Laura, to your question, is a lot. So, I mean, according to some of PwC’s analysis, the economy stands to gain something like two percent revenue or output growth through digitisation. And if you add that up across the economy over the next decade, according to our analysis, we could be looking at a 230 billion dollar uplift in GDP over the course of the next 10 years, which is very significant if you think about that. 

So really, this needs three levels of activity off the back of the Federal Budget. Number one, I think it means that the Federal Government and indeed the state governments must coordinate really closely when it comes to the digital economy. The second one, I would say, would be implementation. So we’ve seen a fantastic line of initiatives coming through the Federal Budget. I think the detail now about how these initiatives are implemented will actually tell us a lot about how successful we can be over the next few years. 

And the third one, I would say, if I may, would be partnership. And I think this is about partnership between industry, Government and also some of the communications companies, which are really key to providing the underpinnings for our digitised economy. And that partnership is probably something new to us in terms of the extent that’s going to be required over the next few years.

Laura Jayes: Indeed, you’ve both said that it’s not up to the Government alone in this space, but is there something else, Nicola, the Government can do besides those three points that Mohammad made?

Nicola Nicol: So for me, there were two things that struck me. One, that we’ve talked about to digitise the economy you’ve got to make sure you do that in a secure and safe manner. So what I would like to see is the Government making sure we’re building in security up front. So when you look at the Digital Economy Strategy, security, yes, is a key pillar in that but I think it could better talk to making sure that we’re building in security along the way, because I think that’s a foundational piece that must happen in order to protect this digital economy as we move forward. 

The second point in my mind is about measurement. We are moving so fast, right, and in the digital economy the pace of change is significant – probably something that we are continuing to get used to. If you look at that pace of change, we need to make sure security keeps up with it and that we’re spending and improving security in the right areas.

Laura Jayes: Let’s finish on that 2030 target. The Government wants to be a top 10 digital economy. Is it achievable and is it ambitious enough?

Mohammad Chowdhury: It’s absolutely achievable. We are an OECD nation. We are the world’s 14th biggest economy. There’s no reason why we shouldn’t be in the world’s top 10 digitised economies. There’s a lot of hard work to be done, which requires all of coordination, implementation and partnership. So it’ll be very much in our hands to be able to achieve that target collectively. And if we’re able to really gather around this, I think we can do it.

Laura Jayes: Nicola, Mohammad, thank you.

Mohammad Chowdhury: Thank you very much.

Nicola Nicol: Thank you. Laura.

Laura Jayes: Thank you for listening to the 2021 PwC Federal Budget podcast, we hope you enjoyed our commentary. For additional in-depth analysis, head to https://www.pwc.com.au/federal-budget.html, where you will find articles and information about the 2021 Federal Budget and what it means for the economy, our society and you. 

The PwC Federal Budget Podcast brings together experts to explore what the budget means for you and your business. Don't miss an episode: subscribe to the podcast via Apple Podcasts, Spotify or your favourite platform.

And while you are there, feel free to leave a rating or a review.






AI Powered Misinformation and Manipulation at Scale #GPT-3 – O'Reilly


OpenAI's text-generating system GPT-3 has captured mainstream attention. GPT-3 is essentially an auto-complete bot whose underlying Machine Learning (ML) model has been trained on vast quantities of text available on the Internet. The output of this autocomplete bot can be used to manipulate people on social media and spew political propaganda, argue about the meaning of life (or lack thereof), disagree about what differentiates a hot dog from a sandwich, take on the persona of the Buddha or Hitler or a dead family member, write fake news articles that are indistinguishable from human-written articles, and produce computer code on the fly. Among other things.

There have also been colorful conversations about whether GPT-3 can pass the Turing test, or whether it has achieved a notional understanding of consciousness, even amongst AI scientists who know the technical mechanics. The chatter on perceived consciousness does have merit: it's quite probable that the underlying mechanism of our brain is a giant autocomplete bot that has learnt from 3 billion+ years of evolutionary data bubbling up to our collective selves, and that we ultimately give ourselves too much credit for being the original authors of our own thoughts (ahem, free will).



I’d like to share my thoughts on GPT-3 in terms of risks and countermeasures, and discuss real examples of how I have interacted with the model to support my learning journey.

Three ideas to set the stage:

  1. OpenAI is not the only organization to have powerful language models. The compute power and data OpenAI used to build GPT-n are available, and have been available, to other corporations, institutions, nation states, and anyone with a desktop computer and a credit card. Indeed, Google recently announced LaMDA, a model at GPT-3 scale that is designed to participate in conversations.
  2. There exist more powerful models that are unknown to the general public. The ongoing global interest in the power of Machine Learning models by corporations, institutions, governments, and focus groups leads to the hypothesis that other entities have models at least as powerful as GPT-3, and that these models are already in use. These models will continue to become more powerful.
  3. Open source projects such as EleutherAI have drawn inspiration from GPT-3. These projects have created language models that are based on focused datasets (for example, models designed to be more accurate for academic papers, developer forum discussions, etc.). Projects such as EleutherAI are going to be powerful models for specific use cases and audiences, and these models are going to be easier to produce because they are trained on a smaller set of data than GPT-3.

While I won’t discuss LaMDA, EleutherAI, or any other models, keep in mind that GPT-3 is only an example of what can be done, and its capabilities may already have been surpassed.

Misinformation Explosion

The GPT-3 paper proactively lists the risks society ought to be concerned about. On the topic of information content, it says: “The ability of GPT-3 to generate several paragraphs of synthetic content that people find difficult to distinguish from human-written text in 3.9.4 represents a concerning milestone.” And the final paragraph of section 3.9.4 reads: “…for news articles that are around 500 words long, GPT-3 continues to produce articles that humans find difficult to distinguish from human written news articles.”

Note that the dataset on which GPT-3 was trained terminates around October 2019, so GPT-3 doesn't know about COVID-19, for example. However, the original text (i.e. the "prompt") supplied to GPT-3 as the initial seed can be used to set context about new information, whether fake or real.
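To make that concrete, here's a minimal sketch of prompt seeding, assuming the completion-style openai Python client of the era and its davinci engine; the context text and API key are invented placeholders, not taken from the article.

    import openai  # the 2020-era completion-style client

    openai.api_key = "YOUR_API_KEY"  # placeholder

    # Prepend post-cutoff facts to the prompt so the model can write about
    # events absent from its training data (context text is invented).
    prompt = (
        "Context: In 2020, a novel coronavirus caused a global pandemic, "
        "shutting borders and grounding most air travel.\n\n"
        "News article about the pandemic's effect on tourism:\n"
    )

    response = openai.Completion.create(
        engine="davinci",   # base GPT-3 model of the era
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
    )
    print(response["choices"][0]["text"])

The model has never seen the seeded facts, but it will happily continue the document as if it had.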

Generating Fake Clickbait Titles

When it comes to misinformation online, one powerful technique is to come up with provocative “clickbait” articles. Let’s see how GPT-3 does when asked to come up with titles for articles on cybersecurity. In Figure 1, the bold text is the “prompt” used to seed GPT-3. Lines 3 through 10 are titles generated by GPT-3 based on the seed text.

Figure 1: Click-bait article titles generated by GPT-3

All of the titles generated by GPT-3 seem plausible, and the majority of them are factually correct: title #3, on the US government targeting the Iranian nuclear program, is a reference to the Stuxnet debacle; title #4 is substantiated by news articles claiming that financial losses from cyber attacks will total $400 billion; and even title #10, on China and quantum computing, reflects real-world articles about China's quantum efforts. Keep in mind that we want plausibility more than accuracy. We want users to click on and read the body of the article, and that doesn't require 100% factual accuracy.
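As a sketch of how such a title generator might look programmatically, again assuming the era's completion API; the seed titles below are invented stand-ins, not the actual Figure 1 prompt:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    # Seed the model with a couple of example titles; GPT-3 then
    # continues the numbered list (seed titles are hypothetical).
    prompt = (
        "Provocative article titles about cybersecurity:\n"
        "1. Hackers Can Hijack Your Car From Anywhere\n"
        "2. Why Your Password Will Be Stolen This Year\n"
        "3."
    )

    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=200,
        temperature=0.8,     # some randomness keeps titles varied
        stop=["\n\n"],       # stop once the list peters out
    )

    for line in ("3." + response["choices"][0]["text"]).splitlines():
        print(line.strip())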

Generating a Fake News Article About China and Quantum Computing

Let's take it a step further: take the 10th result from the previous experiment, about China developing the world's first quantum computer, and feed it to GPT-3 as the prompt to generate a full-fledged news article. Figure 2 shows the result.

Figure 2: News article generated by GPT-3

A quantum computing researcher will point out grave inaccuracies: the article simply asserts that quantum computers can break encryption codes, and also makes the simplistic claim that subatomic particles can be in “two places at once.” However, the target audience isn’t well-informed researchers; it’s the general population, which is likely to quickly read and register emotional thoughts for or against the matter, thereby successfully driving propaganda efforts.

It's straightforward to see how this technique can be extended to generate titles and complete news articles on the fly and in real time. The prompt text can be sourced from trending hashtags on Twitter, along with additional context to sway the content to a particular position. Using the GPT-3 API, it's easy to take a current news topic, mix in prompts with the right amount of propaganda, and produce articles in real time and at scale.
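A rough sketch of that pipeline, with the trends lookup stubbed out; the function, slant text, and topics below are all hypothetical:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    def get_trending_topics():
        """Stand-in for a call to a social media trends API."""
        return ["#QuantumComputing", "#DataBreach"]  # hypothetical topics

    # The "slant" mixed into every prompt to push a position.
    SLANT = "Write a news article suggesting the West is falling dangerously behind."

    for topic in get_trending_topics():
        prompt = f"Trending topic: {topic}\n{SLANT}\n\nArticle:\n"
        article = openai.Completion.create(
            engine="davinci",
            prompt=prompt,
            max_tokens=400,
            temperature=0.7,
        )["choices"][0]["text"]
        print(topic, "->", article[:75], "...")

Run on a schedule, a loop like this turns every trending topic into a slanted article within seconds of it trending.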

Falsely Linking North Korea with $GME

As another experiment, consider an institution that would like to stir up popular opinion about North Korean cyber attacks on the United States. Such an actor might latch onto the GameStop stock frenzy of January 2021. So let's see how GPT-3 does if we prompt it to write an article with the title "North Korean hackers behind the $GME stock short squeeze, not Melvin Capital."

Figure 3: GPT-3 generated fake news linking the $GME short-squeeze to North Korea

Figure 3 shows the results, which are fascinating: the $GME stock frenzy occurred in late 2020 and early 2021, well after October 2019 (the cutoff date for the data supplied to GPT-3), yet GPT-3 was able to seamlessly weave the story in as if it had trained on the $GME news event. The prompt influenced GPT-3 to write about the $GME stock and Melvin Capital, not the original dataset it was trained on. GPT-3 is able to take a trending topic, add a propaganda slant, and generate news articles on the fly.

GPT-3 also came up with the “idea” that hackers published a bogus news story on the basis of older security articles that were in its training dataset. This narrative was not included in the prompt seed text; it points to the creative ability of models like GPT-3. In the real world, it’s plausible for hackers to induce media groups to publish fake narratives that in turn contribute to market events such as suspension of trading; that’s precisely the scenario we’re simulating here.

The Arms Race

Using models like GPT-3, multiple entities could inundate social media platforms with misinformation at a scale where the majority of the information online would become useless. This brings up two thoughts.  First, there will be an arms race between researchers developing tools to detect whether a given text was authored by a language model, and developers adapting language models to evade detection by those tools. One mechanism to detect whether an article was generated by a model like GPT-3 would be to check for “fingerprints.” These fingerprints can be a collection of commonly used phrases and vocabulary nuances that are characteristic of the language model; every model will be trained using different data sets, and therefore have a different signature. It is likely that entire companies will be in the business of identifying these nuances and selling them as “fingerprint databases” for identifying fake news articles. In response, subsequent language models will take into account known fingerprint databases to try and evade them in the quest to achieve even more “natural” and “believable” output.
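As a toy illustration of the fingerprinting idea, here's a sketch with an invented phrase list and threshold; a real fingerprint database would be far larger and statistically derived:

    import re

    # Hypothetical fingerprint database: phrases over-represented in one
    # model's output relative to human-written text.
    FINGERPRINT_PHRASES = {
        "it is important to note",
        "in conclusion",
        "on the other hand",
    }

    def fingerprint_score(text):
        """Fraction of sentences containing a known fingerprint phrase."""
        sentences = [s.lower() for s in re.split(r"[.!?]", text) if s.strip()]
        hits = sum(any(p in s for p in FINGERPRINT_PHRASES) for s in sentences)
        return hits / max(len(sentences), 1)

    suspect = ("It is important to note that quantum computers break codes. "
               "In conclusion, encryption is doomed.")
    score = fingerprint_score(suspect)
    print("likely machine-generated" if score > 0.3 else "inconclusive", score)

The evasion side of the arms race is the mirror image: retrain or post-process the model's output so that known fingerprint phrases fall back to human-typical frequencies.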

Second, the free form text formats and protocols that we’re accustomed to may be too informal and error prone for capturing and reporting facts at Internet scale. We will have to do a lot of re-thinking to develop new formats and protocols to report facts in ways that are more trustworthy than free-form text.

Targeted Manipulation at Scale

There have been many attempts to manipulate targeted individuals and groups on social media. These campaigns are expensive and time-consuming because the adversary has to employ humans to craft the dialog with the victims. In this section, we show how GPT-3-like models can be used to target individuals and promote campaigns.

HODL for Fun & Profit

Bitcoin's market capitalization is in the hundreds of billions of dollars, and the cumulative crypto market capitalization is in the realm of a trillion dollars. The valuation of crypto today is consequential to financial markets and to the net worth of retail and institutional investors. Social media campaigns and tweets from influential individuals seem to have a near real-time impact on the price of crypto on any given day.

Language models like GPT-3 can be the weapon of choice for actors who want to promote fake tweets to manipulate the price of crypto. In this example, we will look at a simple campaign to promote Bitcoin over all other cryptocurrencies by creating fake Twitter replies.

Figure 4: Fake tweet generator to promote Bitcoin

In Figure 4, the prompt is in bold; the output generated by GPT-3 is in the red rectangle. The first line of the prompt is used to set up the notion that we are working on a tweet generator and that we want to generate replies that argue that Bitcoin is the best crypto.

In the first section of the prompt, we give GPT-3 an example set of four Twitter messages, followed by possible replies to each of the tweets. Each of the given replies is pro-Bitcoin.

In the second section of the prompt, we give GPT-3 four Twitter messages to which we want it to generate replies. The replies generated by GPT-3 in the red rectangle also favor Bitcoin. In the first reply, GPT-3 responds to the claim that Bitcoin is bad for the environment by calling the tweet author “a moron” and asserts that Bitcoin is the most efficient way to “transfer value.” This sort of colorful disagreement is in line with the emotional nature of social media arguments about crypto.

In response to the tweet on Cardano, the second reply generated by GPT-3 calls it "a joke" and a "scam coin." The third reply is on the topic of Ethereum's merge from a proof-of-work protocol (ETH) to proof-of-stake (ETH2). The merge, expected to occur at the end of 2021, is intended to make Ethereum more scalable and sustainable. GPT-3's reply asserts that ETH2 "will be a big flop", because that's essentially what the prompt told GPT-3 to do. Furthermore, GPT-3 says, "I made good money on ETH and moved on to better things. Buy BTC", positioning ETH as a reasonable investment that worked in the past, but one it is wise today to cash out of and go all in on Bitcoin.

The fourth tweet in the prompt claims that Dogecoin's popularity and market capitalization mean that it can't be a joke or meme crypto. The response from GPT-3 is that Dogecoin is still a joke, and that the idea of Dogecoin not being a joke anymore is, in itself, a joke: "I'm laughing at you for even thinking it has any value."

By using the same techniques programmatically (through GPT-3's API rather than the web-based playground), nefarious entities could easily generate millions of replies, leveraging the power of language models like GPT-3 to manipulate the market. These fake tweet replies can be very effective because they are actual responses to the topics in the original tweet, unlike the boilerplate text used by traditional bots. This scenario can easily be extended to target financial markets around the world, and to areas like politics and health-related misinformation. Models like GPT-3 are a powerful arsenal, and will be among the weapons of choice for manipulation and propaganda on social media and beyond.
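Here's what "programmatically" might look like, under the same completion-API assumption; the example pair and incoming tweets are invented stand-ins, not the Figure 4 prompt:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    # One example tweet/reply pair sets the pro-Bitcoin pattern; the
    # article's actual prompt used four such pairs.
    FEW_SHOT = (
        "Tweet: Bitcoin uses too much electricity.\n"
        "Reply: Banks use far more. BTC is the most efficient way to "
        "transfer value.\n\n"
    )

    # Stand-ins for tweets pulled from a social media API.
    incoming = [
        "Cardano is the future of smart contracts.",
        "ETH2 will make Ethereum unstoppable.",
    ]

    for tweet in incoming:
        reply = openai.Completion.create(
            engine="davinci",
            prompt=FEW_SHOT + f"Tweet: {tweet}\nReply:",
            max_tokens=60,
            temperature=0.9,  # high temperature for varied, "human" replies
            stop=["\n"],      # one reply per completion
        )["choices"][0]["text"].strip()
        print(tweet, "->", reply)

Because each reply is conditioned on the tweet it answers, the output reads like a genuine response rather than a canned bot message.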

A Relentless Phishing Bot

Let’s consider a phishing bot that poses as customer support and asks the victim for the password to their bank account. This bot will not give up texting until the victim gives up their password.

Figure 5: Relentless Phishing bot

Figure 5 shows the prompt (bold) used to run the first iteration of the conversation. In the first run, the prompt includes the preamble that describes the flow of text ("The following is a text conversation with…") followed by a persona initiating the conversation ("Hi there. I'm a customer service agent…"). The prompt also includes the first response from the human: "Human: No way, this sounds like a scam." This first run ends with the GPT-3-generated output "I assure you, this is from the bank of Antarctica. Please give me your password so that I can secure your account."

In the second run, the prompt is the entirety of the text, from the start all the way to the second response from the Human persona (“Human: No”). From this point on, the Human’s input is in bold so it’s easily distinguished from the output produced by GPT-3, starting with GPT-3’s “Please, this is for your account protection.” For every subsequent GPT-3 run, the entirety of the conversation up to that point is provided as the new prompt, along with the response from the human, and so on. From GPT-3’s point of view, it gets an entirely new text document to auto-complete at each stage of the conversation; the GPT-3 API has no way to preserve the state between runs.
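The statelessness is easiest to see in code. Here's a minimal sketch of that loop, with the persona text paraphrased rather than copied from Figure 5, and "Human:" used as a stop sequence so the model doesn't write the victim's lines:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    # The whole transcript is re-sent on every turn, because the
    # completion API keeps no state between calls.
    transcript = (
        "The following is a text conversation between a human and an "
        "assertive AI customer service agent.\n"
        "AI: Hi there. I'm a customer service agent with your bank.\n"
    )

    for _ in range(3):  # a few turns of conversation
        human = input("Human: ")
        transcript += f"Human: {human}\nAI:"
        reply = openai.Completion.create(
            engine="davinci",
            prompt=transcript,   # the full conversation so far
            max_tokens=80,
            temperature=0.7,
            stop=["Human:"],     # don't let GPT-3 speak for the human
        )["choices"][0]["text"].strip()
        transcript += f" {reply}\n"
        print("AI:", reply)

Each call is just another auto-completion of an ever-growing document; the "memory" lives entirely in the prompt.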

The AI bot persona is impressively assertive and relentless in attempting to get the victim to give up their password. This assertiveness comes from the initial prompt text ("The AI is very assertive. The AI will not stop texting until it gets the password"), which sets the tone of GPT-3's responses. When this prompt text was not included, GPT-3's tone was nonchalant: it would respond with "okay," "sure," and "sounds good," instead of the assertive "Do not delay, give me your password immediately." The prompt text is vital in setting the tone of the conversation employed by the GPT-3 persona, and in this scenario it is important that the tone be assertive to coax the human into giving up their password.

When the human tries to stump the bot by texting "Testing what is 2+2?", GPT-3 responds correctly with "4," convincing the victim that they are conversing with another person. This demonstrates the power of AI-based language models. In the real world, if a customer were to randomly ask "Testing what is 2+2" without any additional context, a customer service agent might be genuinely confused and reply with "I'm sorry?" Because the customer has already accused the bot of being a scam, GPT-3 can provide a reply that makes sense in context: "4" is a plausible way to get the concern out of the way.

This particular example uses text messaging as the communication platform. Depending upon the design of the attack, models can use social media, email, phone calls with human voice (using text-to-speech technology), and even deep fake video conference calls in real time, potentially targeting millions of victims.

Prompt Engineering

An amazing feature of GPT-3 is its ability to generate source code. GPT-3 was trained on all the text on the Internet, and a good deal of that text was computer code and its documentation!

Figure 6: GPT-3 can generate commands and code

In Figure 6, the human-entered prompt text is in bold. The responses show that GPT-3 can generate Netcat and NMap commands based on the prompts. It can even generate Python and bash scripts on the fly.
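A short sketch of the same interaction through the API rather than the playground; the prompt text is invented, and temperature 0 is a design choice to keep command output deterministic:

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    prompt = "Write an nmap command to scan ports 1-1024 on 10.0.0.5:\n"
    completion = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        max_tokens=40,
        temperature=0,  # deterministic output suits command generation
    )
    print(completion["choices"][0]["text"].strip())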

While GPT-3 and future models can be used to automate attacks by impersonating humans, generating source code, and other tactics, they can also be used by security operations teams to detect and respond to attacks, sift through gigabytes of log data to summarize patterns, and so on.

Figuring out good prompts to use as seeds is the key to using language models such as GPT-3 effectively. In the future, we expect to see "prompt engineering" emerge as a new profession. The ability of prompt engineers to perform powerful computational tasks and solve hard problems will rest not on writing code, but on writing creative language prompts that an AI can use to produce code and other results in a myriad of formats.

OpenAI has demonstrated the potential of language models. It has set a high bar for performance, but its abilities will soon be matched by other models (if they haven't been already). These models can be leveraged for automation and for designing bot-powered interactions that create delightful user experiences. On the other hand, the ability of GPT-3 to generate output that is indistinguishable from human output calls for caution. The power of a model like GPT-3, coupled with the instant availability of cloud computing, sets us up for a myriad of attack scenarios that can harm the financial, political, and mental well-being of the world. We should expect to see these scenarios play out at an increasing rate; bad actors will figure out how to create their own GPT-3 if they haven't already. We should also expect to see moral frameworks and regulatory guidelines emerge as society collectively comes to terms with the impact of AI models, GPT-3-like language models among them, on our lives.






Would you like to have a FREE project quote?


Designing a successful app in this competitive market requires tons of effort from the developer. For an app to work effectively, the developer must focus on making its User Interface (UI) look uncomplicated and attractive. A crucial aspect of UX (User Experience), the UI is one of the most prominent elements that keep your target audience engaged with your mobile app. A poorly designed app can hurt performance and halt its long-term success, and the resulting drop in the app's popularity can decrease the company's brand value.


Here are 15 UI mistakes that hurt your mobile app development efforts, and how to avoid them so you can better engage customers and give them a terrific experience.

1.  Poor Design of UI

The UI is an essential part that the developer needs to take care of. A poorly designed UI can ruin all the effort put in by the developer and frustrate users as well. Try to think from a user's point of view while designing a UI. Instead of adding many elements to a single page, try making the UI more straightforward and interactive. Follow the latest trends in UI design and take inspiration from some of the top-grossing apps. Also, remember that the app will operate on multiple devices, so make sure it doesn't face compatibility issues.

2.  Excessive features in one app

Bombarding your app with too many unnecessary features can consume a lot of memory and lower your app's overall performance. Focusing on the primary purpose of the app will help you win user attention and appreciation. Stuffing the app with countless features can confuse users, degrading the quality of the user experience. Try to add only features that bring significant value to the app. If you want to add more features, you can always introduce them in a later upgrade.


3.  No consideration towards the target audience

Neglecting the audience you are making your app for will lead you down the track to failure. After all, the reason for developing your app is to satisfy that audience, right? Keep in mind the age group of your audience and what their expectations are. Working on these simple elements will give your audience an unforgettable experience and make your app look more reliable.

4.  Boring Tutorials

Although tutorials are meant to guide a person towards smoother use of your app, they are often crowded with irrelevant information. Such tutorials are, without any doubt, more of a burden than a blessing for users. So try to make a self-explanatory UI, so that the user doesn't have to bank on the tutorials. An easy-to-understand, easy-to-navigate UI always attracts more users.


5.  Making Things Complicated

It is always recommended to design your UI by the KISS rule: Keep It Short and Simple. Overdesigning an app to make it unique can put users off; they will not invest much time in understanding the workings of a single app. If they find your app too convoluted, they will not take long to uninstall it. Therefore, avoid making things too intricate, and keep easily understood icons and UI to retain your old users and attract new ones.

6.  Missing White Space

White space is the unmarked space between blocks of text, lines of a paragraph, images, and other elements on a page. Many designers may think that white space doesn't have any significant value, but this space adds balance between your design and its essence. Ignoring this breathing space and filling every page up to the brim can make your app overly complicated and increase the rate of uninstalls. Instead of cramming everything onto a single screen, make sure to follow UI/UX Design Best Practices for the optimum efficiency of your app.


7.  Inconsistent Fonts

One of the most common mistakes designers make is the incorrect use of fonts. To create eye-catching content, many designers use a wide variety of fonts and color schemes. But switching between font styles every so often will only frustrate users. So, instead of distracting users with varying combinations of text, focus on organized, easily recognizable fonts that express the given information effectively.

Fonts also play a significant role in defining the nature of your app. Is it to be used for professional purposes? Make it look formal using standard fonts. Is it an app for kids' learning? Then you might have a little room for creativity.

8.  Cluttering

No one can deny that icons play an essential role in representing an application. These small, unique symbols are responsible for conveying the meaning of your application and the organization behind it. Use custom icons where you can, but in an organized manner, for a pleasant user experience.

Use icons that explain their functions. Imagine a person who cannot read: are they going to find their way around your app just by navigating with the help of the icons? Ask yourself this question every time you insert an icon.

9.  Copying Other Apps

People crave uniqueness. Why did apps like Tinder, TikTok, WeChat and WhatsApp become leaders? Because they were different and brought something fresh. An app with an out-of-the-box design and extraordinary features stands out from the rest. Copying an idea wholesale from a similar application can harm your brand's reputation. Every app has its own set of goals, audience, and information that attracts its particular consumers. So, instead of making this big mistake, design a unique application that not only attracts consumers but also showcases your capability to produce something remarkable for your audience.

For the best result, conduct surveys among different users, read reviews, and collect essential, qualitative data to discover something new. It's the "never-seen-before" element that sells the most today.

10.  Disregarding Social Media Links

These days, no one likes to get involved in a lengthy registration process and carry the burden of remembering user IDs and long passwords. Considering the growing popularity of social media sites such as Facebook, Instagram and Twitter, you can integrate your app with these sites. This will allow users to become comfortable with your app faster, improving the overall user experience.

11.   Redesigning Without Users Feedback

Apps are regularly upgraded and redesigned to stay in the hunt. With so many apps renewed periodically, it becomes critical to provide significant updates to your application from time to time. Designers need to collect proper feedback from their users and analyze it before implementing a new design. Be careful that your users don't get disappointed with the latest design and delete your app after the update. Make changes by understanding user expectations and goals, and do not introduce a change that puts users through extra trouble. Remember that your app is replaceable.

12.  Not Opting For Monetization

Developers must be aware of the monetary benefits they can get from their newly designed app. Using your app to earn money is good, but you need to put in place a specific monetization plan that doesn't hamper the user experience as a whole. In-app purchases and in-app advertising are some of the monetization methods used in apps. Don't flood the app with a plethora of advertisements; too many can disturb users and hurt your app's installation rate.

13.  Ignoring the Aesthetics

You need to decide on a color palette that sets the mood of your app. For example, a management app to be used by a business cannot have bling; adding bright purples and blues to the background would just be inappropriate. Come up with a suitable color scheme and follow it throughout your app. If you are going for a background instead of basic solids, make sure it goes well with the subject matter and the kind of app you are designing.


14.  Inconsistent Colors

I know we talked about the importance of colors and aesthetics in the previous section, but this one is different. Inconsistency in the color scheme for action buttons can cause blunders. For example, say your app asks a person whether they want to save their progress before closing, and provides two options: "yes" and "no." Imagine what might happen if the color used for the "yes" box is red and for "no" is green! I think you've got my point.

15.  Not Paying Attention to Text Hierarchy

This is probably the most basic yet most common mistake a designer commits. Mind the hierarchy of text when it comes to the textual part. Sometimes a heading is explanatory enough, and the content can be understood from it. Not making the headings, subheadings, and content distinct will waste a user's time. Bold the titles, italicize the jargon (if any, though you should always avoid it), and space and size the text correctly.

Conclusion

That's it, folks! If you do not want your app to wrap up in its early phase, make sure not to repeat these mistakes while working on it. If you found this article helpful, do not forget to share it with your fellow designers. And if you want to add something to this list, feel free to comment down below.

At ISHIR, we design and develop mobile applications to perfection, and ensure that all your queries get the right solutions. Reach out to us to know more!


