Infosec, music, beer, and other things geek. Searching for my path to world domination.
1044 stories · 18 followers

AT&T and Warner Media Leadership Outline Changes Coming to HBO Over the Next Year

1 Comment
On June 19, former AT&T executive and new chief executive of Warner Media John Stankey spoke to a group of HBO employees about changes coming to the premium cable company in the near future. The discussion was held in the wake of AT&T's acquisition of Time Warner, which owns HBO, and also included HBO's chief executive officer Richard Plepler.

The telecommunications company previously stated that it would take a "hands-off approach" to running HBO, but The New York Times this weekend reported on Stankey's speech and it sounds like that might not be the case. According to a video of the discussion, Stankey explained Warner Media's intent to align HBO more alongside streaming companies like Netflix in order to increase its subscriber base, although he refrained from referencing Netflix by name.


This means creating more content and releasing it at a faster pace, compared with HBO's current, more limited slate of Sunday night-focused shows. According to Stankey, the goal is to increase the hours per day viewers watch HBO, which currently trails rivals like Netflix and Hulu because of HBO's smaller catalog.
“We need hours a day,” Mr. Stankey said, referring to the time viewers spend watching HBO programs. “It’s not hours a week, and it’s not hours a month. We need hours a day. You are competing with devices that sit in people’s hands that capture their attention every 15 minutes.”
Continuing this thread, Stankey specifically stated that more hours of user engagement means that Warner Media can "get more data and information" to monetize through advertisements and new subscription options.
“I want more hours of engagement. Why are more hours of engagement important? Because you get more data and information about a customer that then allows you to do things like monetize through alternate models of advertising as well as subscriptions, which I think is very important to play in tomorrow’s world.”
As the discussion continued, Stankey appeared to butt heads slightly with Plepler on the topic of HBO's monetization, which Stankey believes can be increased through his new approach. Plepler countered that the company is already a consistent moneymaker, to which Stankey responded: "Yes, yes you do... Just not enough."

Stankey and Warner Media hope that an increased output of original content will grow HBO's base of 40 million paid U.S. subscribers, a figure Stankey said "was not going to cut it." By comparison, Netflix had 55 million U.S. subscribers earlier this year, and Hulu had 20 million as of May.

HBO's business currently spans paid cable add-on packages, the connected HBO GO app, and the standalone HBO NOW app. Stankey said that Warner Media's plans will kick off soon and that "there's going to be more work" for HBO employees over the next twelve months, which he called a "dog year."

While Apple wasn't mentioned in the discussion, the Cupertino company is another upcoming competitor in the streaming TV market, with plans to debut more than a dozen television shows beginning sometime in 2019. Although the distribution of these shows remains unclear, the company is rumored to be planning a bundle with original TV content, Apple Music, and more.



DaftDoki (Seattle), 7 days ago: this won't be good
MotherHydra, 7 days ago: I'm curious to see if they shoot one or both feet.

How AWS uses automated reasoning to help you achieve security at scale

1 Share

At AWS, we focus on achieving security at scale to diminish risks to your business. Fundamental to this approach is ensuring your policies are configured in a way that helps protect your data, and the Automated Reasoning Group (ARG), an advanced innovation team at AWS, is using automated reasoning to do it.

What is automated reasoning, you ask? It’s a method of formal verification that automatically generates and checks mathematical proofs which help to prove the correctness of systems; that is, fancy math that proves things are working as expected. If you want a deeper understanding of automated reasoning, check out this re:Invent session. While the applications of this methodology are vast, in this post I’ll explore one specific aspect: analyzing policies using an internal Amazon service named Zelkova.

What is Zelkova? How will it help me?

Zelkova uses automated reasoning to analyze policies and the future consequences of policies. This includes AWS Identity and Access Management (IAM) policies, Amazon Simple Storage Service (S3) policies, and other resource policies. These policies dictate who can (or can’t) do what to which resources. Because Zelkova uses automated reasoning, you no longer need to think about what questions you need to ask about your policies. Using fancy math, as mentioned above, Zelkova will automatically derive the questions and answers you need to be asking about your policies, improving confidence in your security configuration(s).

How does it work?

Zelkova translates policies into precise mathematical language and then uses automated reasoning tools to check properties of the policies. These tools include automated reasoners called Satisfiability Modulo Theories (SMT) solvers, which use a mix of numbers, strings, regular expressions, dates, and IP addresses to prove and disprove logical formulas. Zelkova has a deep understanding of the semantics of the IAM policy language and builds upon a solid mathematical foundation. While tools like the IAM Policy Simulator let you test individual requests, Zelkova is able to use mathematics to talk about all possible requests. Other techniques guess and check, but Zelkova knows.

You may have noticed, as an example, the new “Public / Not public” checks in S3. These are powered by Zelkova:

Figure 1: the “Public/Not public” checks in S3

S3 uses Zelkova to check each bucket policy and warns you if an unauthorized user is able to read or write to your bucket. When a bucket is flagged as “Public”, there are some public requests that are allowed to access the bucket. However, when a bucket is flagged as “Not public”, all public requests are denied. Zelkova is able to make such statements because it has a precise mathematical representation of IAM policies. In fact, it creates a formula for each policy and proves a theorem about that formula.

Consider the following S3 bucket policy statement where my goal is to disallow a certain principal from accessing the bucket:


{
    "Effect": "Allow",
    "NotPrincipal": { "AWS": "111122223333" },
    "Action": "*",
    "Resource": "arn:aws:s3:::test-bucket"
}

Unfortunately, this policy statement does not capture my intentions. Instead, it allows access for everybody in the world who is not the given principal. This means almost everybody now has access to my bucket, including anonymous unauthorized users. Fortunately, as soon as I attach this policy, S3 flags my bucket as “Public”—warning me that there’s something wrong with the policy I wrote. How did it know?

Zelkova translates this policy into a mathematical formula:

(Resource = “arn:aws:s3:::test-bucket”) ∧ (Principal ≠ 111122223333)

Here, ∧ is the mathematical symbol for “and” which is true only when both its left and right side are true. Resource and Principal are variables just like you would use x and y in algebra class. The above formula is true exactly when my policy allows a request. The precise meaning of my policy has now been defined in the universal language of mathematics. The next step is to decide if this policy formula allows public access, but this is a hard problem. Now Zelkova really goes to work.
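
To make this concrete, here is a rough sketch of that formula in the Z3 SMT solver's Python bindings (my own simplified encoding for illustration, not Zelkova's actual internal representation). The solver immediately finds a request from a principal other than 111122223333 that the statement allows:

# pip install z3-solver
from z3 import String, StringVal, And, Solver, sat

# Model a request by the two variables the statement constrains.
Resource = String('Resource')
Principal = String('Principal')

# The NotPrincipal/Allow statement above: allow when the resource matches
# and the principal is anything EXCEPT 111122223333.
allows = And(Resource == StringVal('arn:aws:s3:::test-bucket'),
             Principal != StringVal('111122223333'))

solver = Solver()
solver.add(allows)
if solver.check() == sat:
    # The model is a concrete allowed request from some arbitrary principal,
    # evidence that far more than the intended account can access the bucket.
    print(solver.model())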

A counterintuitive trick sometimes used by mathematicians is to make a problem harder in order to make finding a solution easier. That is, solving a more difficult problem can sometimes lead to a simpler solution. In this case, Zelkova solves the harder problem of comparing two policies against each other to decide which is more permissive. If P1 and P2 are policy formulas, then suppose formula P1 ⇒ P2 is true. This arrow symbol is an implication that means whenever P1 is true, P2 must also be true. So, whenever policy 1 accepts a request, policy 2 must also accept the request. Thus, policy 2 is at least as permissive as policy 1. Suppose also that the converse formula P2 ⇒ P1 is not true. That means there’s a request which makes P2 true and P1 false. This request is allowed by policy 2 and is denied by policy 1. Combining all these results, policy 1 is strictly less permissive than policy 2.

How does this solve the “Public / Not public” problem? Zelkova has a special policy that allows anonymous, unauthorized users to access an S3 resource. It compares your policy against this policy. If your policy is more permissive, then Zelkova says your policy allows public access. If you restrict access—for example, based on source VPC endpoint (aws:SourceVpce) or source IP address (aws:SourceIp)—then your policy is not more permissive than the special policy, and Zelkova says your policy does not allow public access.
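
Sketched with an SMT solver, that comparison is a single unsatisfiability check: "P2 is at least as permissive as P1" holds exactly when P1 ∧ ¬P2 has no satisfying request. The snippet below illustrates the idea with Z3 and the toy formulas from this post (again a simplification for illustration, not Zelkova's implementation):

# pip install z3-solver
from z3 import String, StringVal, And, Not, Solver, unsat

Resource = String('Resource')
Principal = String('Principal')

def at_least_as_permissive(p2, p1):
    """True when every request allowed by p1 is also allowed by p2,
    i.e. when p1 AND NOT(p2) has no satisfying assignment."""
    solver = Solver()
    solver.add(And(p1, Not(p2)))
    return solver.check() == unsat

# A toy "public" reference policy: any principal may access the bucket.
public = Resource == StringVal('arn:aws:s3:::test-bucket')

# The overly broad NotPrincipal statement from earlier.
mine = And(Resource == StringVal('arn:aws:s3:::test-bucket'),
           Principal != StringVal('111122223333'))

print(at_least_as_permissive(public, mine))  # True: everything mine allows, public allows
print(at_least_as_permissive(mine, public))  # False: public allows 111122223333, mine does not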

For all this to work, Zelkova uses SMT solvers. Using mathematical language, these tools take a formula and either prove it is true for all possible values of the variables, or they return a counterexample that makes the formula false.

To understand SMT solvers better, you can play with them yourself. For example, if asked to prove x + y > x, an SMT solver will quickly find a counterexample such as x=5, y=-1. To fix this, you could strengthen your formula to assume that y is positive:

(y > 0) ⇒ (x + y > x)

The SMT solver will now respond that your formula is true for all values of the variables x and y. It does this using the rules of algebra and logic. This same idea carries over into theories like strings. You can ask the SMT solver to prove the formula length(append(a,b)) > length(a) where a and b are string variables. It will find a counterexample such as a=”hello” and b=”” where b is the empty string. This time, you could fix your formula by changing from greater-than to greater-than-or-equal-to:

length(append(a, b)) ≥ length(a)

The SMT solver will now respond that the formula is true for all values of the variables a and b. Here, the solver has combined reasoning about strings (length, append) with reasoning about numbers (greater-than-or-equal-to). SMT solvers are designed for exactly this sort of theory composition.
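
You can try both of these examples with an off-the-shelf SMT solver. A minimal sketch using Z3's Python bindings (assuming the z3-solver package; other SMT solvers behave similarly):

# pip install z3-solver
from z3 import Int, String, Implies, Length, Concat, prove

x, y = Int('x'), Int('y')

prove(x + y > x)                          # counterexample, e.g. y = 0
prove(Implies(y > 0, x + y > x))          # proved

a, b = String('a'), String('b')

prove(Length(Concat(a, b)) > Length(a))   # counterexample with b = ""
prove(Length(Concat(a, b)) >= Length(a))  # proved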

What about my original policy? Once I see that my bucket is public, I can fix my policy using an explicit “Deny”:


{
    "Effect": "Deny"
    "Principal": { "AWS": "111122223333" },
    "Action": "*",
    "Resource": "arn:aws:s3:::test-bucket"
}

With this policy statement attached, S3 correctly reports my bucket as “Not public”. Zelkova has translated this policy into a mathematical formula, compared it against a special policy, and proved that my policy is less permissive. Fancy math has proved that things are working (or in this case, not working) as expected.
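
If you prefer to check that verdict programmatically rather than in the console, the S3 API exposes the same public/not-public status. A brief sketch using boto3 (assuming a boto3 version that includes get_bucket_policy_status; "test-bucket" is the example bucket from this post):

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client('s3')

try:
    # Reports whether the attached bucket policy allows public access.
    status = s3.get_bucket_policy_status(Bucket='test-bucket')
    print('Public' if status['PolicyStatus']['IsPublic'] else 'Not public')
except ClientError as err:
    # For example, NoSuchBucketPolicy when no bucket policy is attached.
    print(err.response['Error']['Code'])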

Where else is Zelkova being used?

In addition to S3, several other AWS services are using Zelkova.

We have also engaged with a number of enterprise and regulated customers who have adopted Zelkova for their use cases:

“Bridgewater, like many other security-conscious AWS customers, needs to quickly reason about the security posture of our AWS infrastructure, and an integral part of that posture is IAM policies. These govern permissions on everything from individual users, to S3 buckets, KMS keys, and even VPC endpoints, among many others. Bridgewater uses Zelkova to verify and provide assurances that our policies do not allow data exfiltration, misconfigurations, and many other malicious and accidental undesirable behaviors. Zelkova allows our security experts to encode their understanding once and then mechanically apply it to any relevant policies, avoiding error-prone and slow human reviews, while at the same time providing us high confidence in the correctness and security of our IAM policies.”
Dan Peebles, Lead Cloud Security Architect at Bridgewater Associates

Summary

AWS services such as S3 use Zelkova to precisely represent policies and prove that they are secure—improving confidence in your security configurations. Zelkova can make broad statements about all resource requests because it’s based on mathematics and proofs instead of heuristics, pattern matching, or simulation. The ubiquity of policies in AWS means that the value of Zelkova and its benefits will continue to grow as it serves to protect more customers every day.

Want more AWS Security news? Follow us on Twitter.

Shared by DaftDoki (Seattle), 25 days ago

Amazon EC2 Update – Additional Instance Types, Nitro System, and CPU Options

1 Share

I have a backlog of EC2 updates to share with you. We’ve been releasing new features and instance types at a rapid clip and it is time to catch up. Here’s a quick peek at where we are and where we are going…

Additional Instance Types
Here’s a quick recap of the most recent EC2 instance type announcements:

Compute-Intensive – The compute-intensive C5d instances provide a 25% to 50% performance improvement over the C4 instances. They are available in 5 regions and offer up to 72 vCPUs, 144 GiB of memory, and 1.8 TB of local NVMe storage.

General Purpose – The general purpose M5d instances are also available in 5 regions. They offer up to 96 vCPUs, 384 GiB of memory, and 3.6 TB of local NVMe storage.

Bare Metal – The i3.metal instances became generally available in 5 regions a couple of weeks ago. You can run performance analysis tools that are hardware-dependent, workloads that require direct access to bare-metal infrastructure, applications that need to run in non-virtualized environments for licensing or support reasons, and container environments such as Clear Containers, while you take advantage of AWS features such as Elastic Block Store (EBS), Elastic Load Balancing, and Virtual Private Clouds. Bare metal instances with 6 TB, 9 TB, 12 TB, and more memory are in the works, all designed specifically for SAP HANA and other in-memory workloads.

Innovation and the Nitro System
The Nitro system is a rich collection of building blocks that can be assembled in many different ways, giving us the flexibility to design and rapidly deliver EC2 instance types with an ever-broadening selection of compute, storage, memory, and networking options. We will deliver new instance types more quickly than ever in the months to come, with the goal of helping you to build, migrate, and run even more types of workloads.

Local NVMe Storage – The new C5d, M5d, and bare metal EC2 instances feature our Nitro local NVMe storage building block, which is also used in the Xen-virtualized I3 and F1 instances. This building block provides direct access to high-speed local storage over a PCI interface and transparently encrypts all data using dedicated hardware. It also provides hardware-level isolation between storage devices and EC2 instances so that bare metal instances can benefit from local NVMe storage.

Nitro Security Chip – A component that is part of our AWS server designs that continuously monitors and protects hardware resources and independently verifies firmware each time a system boots.

Nitro Hypervisor – A thin, quiescent hypervisor that manages memory and CPU allocation, and delivers performance that is indistinguishable from bare metal for most workloads (Brendan Gregg of Netflix benchmarked the overhead at less than 1%).

Networking – Hardware support for the software defined network inside of each Virtual Private Cloud (VPC), Enhanced Networking, and Elastic Network Adapter.

Elastic Block Storage – Hardware EBS processing including CPU-intensive cryptographic operations.

Moving storage, networking, and security functions to hardware has important consequences for both bare metal and virtualized instance types:

Virtualized instances can make just about all of the host’s CPU power and memory available to the guest operating systems since the hypervisor plays a greatly diminished role.

Bare metal instances have full access to the hardware, but also have the same flexibility and feature set as virtualized EC2 instances, including CloudWatch metrics, EBS, and VPC.

To learn more about the hardware and software that make up the Nitro system, watch Amazon EC2 Bare Metal Instances or C5 Instances and the Evolution of Amazon EC2 Virtualization and take a look at The Nitro Project: Next-Generation EC2 Infrastructure.

CPU Options
This feature provides you with additional control over your EC2 instances and lets you optimize your instance for a particular workload.

First, you can specify the desired number of vCPUs at launch time. This allows you to control the vCPU to memory ratio for Oracle and SQL Server workloads that need high memory, storage, and I/O but perform well with a low vCPU count. As a result, you can optimize your vCPU-based licensing costs when you Bring Your Own License (BYOL).

Second, you can disable Intel® Hyper-Threading Technology (Intel® HT Technology) on instances that run compute-intensive workloads. These workloads sometimes exhibit diminished performance when Intel HT is enabled.

Both of these options are available when you launch an instance using the AWS Command Line Interface (CLI) or one of the AWS SDKs. You simply specify the total number of cores and the number of threads per core using values chosen from the CPU Cores and Threads per CPU Core Per Instance Type table. Here's how you would launch an instance with 6 CPU cores and Intel® HT Technology disabled:

$ aws ec2 run-instances --image-id ami-1a2b3c4d --instance-type r4.4xlarge --cpu-options "CoreCount=6,ThreadsPerCore=1"
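
The equivalent launch through one of the AWS SDKs looks much the same. A minimal sketch using boto3, reusing the placeholder AMI ID from the CLI example above:

import boto3

ec2 = boto3.client('ec2')

# Launch an r4.4xlarge with 6 physical cores and one thread per core
# (Intel HT Technology disabled), mirroring the CLI example.
response = ec2.run_instances(
    ImageId='ami-1a2b3c4d',
    InstanceType='r4.4xlarge',
    MinCount=1,
    MaxCount=1,
    CpuOptions={'CoreCount': 6, 'ThreadsPerCore': 1},
)
print(response['Instances'][0]['InstanceId'])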

To learn more, read about Optimizing CPU Options.

Help Wanted
The EC2 team is always hiring and has a number of open positions.

Jeff;

Shared by DaftDoki (Seattle), 25 days ago

Former Amazon devices executive Ian Freed launches Bamboo Music skill for Alexa

1 Share
Bamboo Learning CEO Ian Freed. (Bamboo Learning Photo)

Kids and their parents are getting a new way to use Amazon Echo devices for learning about music. And it’s coming as an Alexa skill from the person who once headed Amazon’s Echo business.

Ian Freed, who left Amazon last year, today officially launched startup Bamboo Learning and its first Alexa skill, Bamboo Music. The skill allows families to learn music theory interactively with Alexa-enabled devices. After saying “Alexa, enable Bamboo Music” or selecting the skill through the Amazon Alexa app, children listen to musical selections and are introduced to music concepts such as notes, scales, chords, tempo, dynamics, and intervals. In addition, they learn to identify the sounds of musical instruments.

Individuals in a family can register for different accounts, and parents can optionally receive emails reporting progress which can be provided to music teachers. As skills are mastered, Bamboo Music automatically advances to the next difficulty level.

Freed headed Amazon’s businesses responsible for Kindle and then Echo devices between 2006 and 2015, overseeing products including Amazon’s Fire Phone. He headed up restaurant delivery for the company before leaving last year. He co-founded Seattle-based Bamboo Learning with Irina Fine, a 30-year veteran of elementary education curriculum development and teaching. Freed serves as CEO and Fine as COO and senior vice president of content.

Bamboo Music was created, in part, out of personal interest.

“I am one of those students who tried music long ago,” Freed told GeekWire. “I’ve tried it again as an adult, and I found one of the gaps for me was understanding how music fits together.” He described Bamboo Music as providing that understanding, and filling a need on the Alexa platform. “I did also feel that Alexa was missing longer, interactive kind of skills,” he said. “With this one, you’re going back and forth with Alexa, minimum of eight questions, you’re learning as you go along, and we record your progress with badges.”

Fine said Bamboo Music could work for children as young as five, because it doesn’t require reading, typically a requirement to learn music theory. “Parents are searching for educational skills for Alexa, because so many people have Alexa in their homes now, and parents are trying to get their kids away from the screens and more interacting with the voice devices,” she said.

Bamboo Learning has videos showing both a 6- and an 8-year-old girl trying Bamboo Music.

The Bamboo Music skill is free, and Freed said they are taking a wait-and-see approach about any subscription features as they attract an audience, adding that there “may be ways to earn revenue for the company without charging consumers.”

For now, Freed says Bamboo Learning is a “very lean organization,” self-funded, with fewer than five employees as well as contractors.

“We’re quite excited about Bamboo Music because we think it offers something that is high quality, informative, and interactive for consumers,” Freed said of the startup’s first educational skill. “If we see other opportunities in other subject areas,” he said, they may create other skills.

Bamboo Music lets kids and adults earn badges for progress. (Bamboo Learning Image)

As to whether there will be a version of Bamboo Music specifically for K-12 schools, Freed said they may look at that opportunity if they can get a significant number of teachers interested, but Freed and Fine said the current version should work for consumers and music educators alike.

Interest in using Echo and other smart speakers for children appears to be growing. After taking part in the Alexa Accelerator, Seattle startup Novel Effect recently raised $3 million for its app that uses voice recognition technology to add sound effects and music to books as “soundscapes” during storytelling. And earlier this year, Amazon released the $80 Echo Dot Kids Edition.

Still, it’ll take a lot of work for Bamboo Music to stand out in a crowd of many thousands of Alexa skills. The startup is counting on targeted marketing, and word of mouth from both music students and music teachers.

“There are several tens of millions of music students and prospective music students,” Freed said, “And I think parents get quite excited about having their children use music.”

Shared by DaftDoki (Seattle), 27 days ago
snm77, 26 days ago: Totally just enabled this after reading the first paragraph...

pfSense Now Available To All QNAP Virtualization Station Users

1 Comment

A few months back, QNAP approached Netgate® with a request. A growing number of Virtualization Station users wanted more security for their “data center in a box” deployments. QNAP wanted a proven firewall and IDS/IPS solution for their customers and specifically wanted pfSense®. Today, it’s available directly from within the QNAP Virtualization Station store.

DaftDoki (Seattle), 35 days ago: that's cool
snm77, 34 days ago: except that Netgate will be sunsetting pfSense in the "near" future to focus on their new firewall... https://www.netgate.com/products/tnsr/

Clouds under the sea: Microsoft deploys its Project Natick data center off the coast of Scotland

1 Comment
Spencer Fowers, senior member of technical staff for Microsoft’s special projects research group, prepares Project Natick’s Northern Isles datacenter for deployment off the coast of the Orkney Islands in Scotland. The datacenter is secured to a ballast-filled triangular base that rests on the seafloor. (Photo and caption courtesy Microsoft / Scott Eklund/Red Box Pictures)

Microsoft kicked off the second phase of its experimental underwater data center project Wednesday, submerging a shipping-container sized data center with 864 servers near the Orkney Islands in Scotland.

Back in 2016, Microsoft first tested its prototype underwater data center designs off the coast of California, hoping to prove the feasibility of a relatively portable data center design that could be placed near population centers as needed. This week a group of researchers deployed the first working production data center 117 feet below the surface of the sea, where it is designed to work without the need for maintenance for five years.

Engineers slide racks of Microsoft servers and associated cooling system infrastructure into Project Natick’s Northern Isles datacenter at a Naval Group facility in Brest, France. The datacenter has about the same dimensions as a 40-foot long ISO shipping container seen on ships, trains and trucks. (Photo and caption courtesy Microsoft / Frank Betermin.)

This particular data center is a fraction of the size of the modern data centers that power cloud computing operations like Microsoft Azure, but its portability and reliance on cold ocean water to keep the systems humming along make it very interesting. Cooling the servers inside a modern data center is almost as expensive as buying the equipment itself, and a networked sequence of underwater data centers could provide computing power to places around the world where environmental conditions make land-based data centers impractical.

Project Natick gets electrical power from a cable connected to a wind farm on the Orkney Islands, and that cable also serves as the conduit for the data processed under the sea. Eventually, Microsoft would like to marry Project Natick to experimental ocean turbines that use wave energy to generate electricity, which could make these data centers entirely self-sufficient.

Project Natick’s Northern Isles datacenter is partially submerged and cradled by winches and cranes between the pontoons of an industrial catamaran-like gantry barge. At the deployment site, a cable containing fiber optic and power wiring was attached to the Microsoft datacenter, and then the datacenter and cable were lowered foot-by-foot 117 feet to the seafloor. (Photo and caption courtesy Microsoft / Scott Eklund/Red Box Pictures.)

There’s still a lot of research that needs to be done to make sure these designs are environmentally sustainable and reliable, and Microsoft will closely monitor the performance and environmental impact of this data center over the next year. The North Sea is a rather unforgiving body of water, with frequent storms and strong currents, and Microsoft believes that if Project Natick can work here, it can work in an awful lot of places around the globe.

And that could be an important step in the evolution of cloud computing. Real-time mobile and web applications are increasingly hamstrung by the speed of light; the data centers they rely on for computing power can often be too far away to avoid significant latency problems. If Microsoft could deploy dozens of these data centers off the coast of a heavily populated area like New York or Tokyo without boiling the ocean, it could provide computing capacity much closer to its end users.

DaftDoki (Seattle), 40 days ago: this is cool