Tag Archives: Cloud

DevOps Tip: Design for Failure

We have had some interesting  internal discussions lately about application reliability.  It’s probably not a surprise to many of you that the cloud is unreliable, on a small scale that is.  Sure, on the large scale you use the cloud to make highly resilient environments. But a certain percentage of calls to the cloud fail – whether it’s Amazon’s or Azure’s management APIs, or hitting Amazon or Azure storage, or going through an Amazon ELB, or hitting SQL Azure. Heck, on Azure they plainly state that they will pull your instances out from under you, restart them, and move them to other hardware without notice. If you’re running 2 or more, they won’t do them all at the same time – so again, you get large scale resilience but at the cost of some small scale unreliability.

The problem is that people often start from the assumption that their application is working fine unless someone can prove otherwise. This is fundamentally the wrong assumption. You have to assume your application has problems unless you can prove it doesn't. This changes your approach to testing, logging, and monitoring profoundly.

Take the all-too-common example of an app with intermittent failures. Let's say it's as bad as 1 in 20: one time in twenty that a customer hits your application, it fails somehow. It is very likely you don't know this, because by default you have no way to know it. How would you? Well, obviously, by monitoring, logging, and testing. I'll follow this up with a series of posts describing how and why those often fail to detect problems. The short form is "ha ha, no they don't."
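Just to put numbers on that, here's a quick back-of-the-envelope in Python (plain arithmetic, nothing fancy) showing why a cursory check tells you almost nothing about a 1-in-20 failure:

    # Odds that a 1-in-20 intermittent failure slips past N spot checks.
    failure_rate = 1 / 20.0

    for n_tests in (1, 5, 10, 30, 60):
        p_all_pass = (1 - failure_rate) ** n_tests
        print("%3d tests: %3.0f%% chance you see zero failures" % (n_tests, p_all_pass * 100))

Five clean manual tests still leave a 77% chance the bug is sitting right there in front of you; even thirty clean tests leave about a 21% chance.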

Here’s a bad story I’ll tell on myself. Here at NI, we rolled out a PDF instant quote generation widget. We have over 250 apps on ni.com, so we don’t put synthetic monitors on all of them (remind me to tell you about the time early at NI that I discovered synthetic monitoring was producing 30% of our site load). Apparently the logging wasn’t all that good either; it didn’t trigger any of our log monitoring heuristics. Anyway, come to find out later on that the app was failing in production about 75% of the time. This is an application on a “monitored” site, where a developer and a tester signed off on the app. Whoops. If you do a cursory test and assume it’ll work – well, you know what they say about assumptions – they make an ass out of “you” and “mption.” 🙂

Anyway, to me part of the beauty of the cloud is that the vendors come out and say “we’re going to fail 2-5% of the time, code for it.” Because before the cloud there were failures all the time too, but people managed to delude themselves into thinking there weren’t; that an application (even a complex Internet-based application) should just work, hit after hit, day after day, on into the future. So by building failure handling in – like a lesser version of the Chaos Monkey – you’re not just making your app cloud friendly, you’re making it better.

Real engineers who make cars and whatnot know better. That’s why there was a big ol’ maintenance hatch on the side of the Hubble Space Telescope; if any of you have watched the Hubble 3D IMAX film you get to see them performing maintenance on it.  If a billion dollar telescope in fricking space has problems and needs to be maintainable, so does your little Web app.

But I see so many apps that don’t really take failure into account. Oh, maybe they retry some connections if they fail. But what if you get to the end of your retries? What if the response you get back is an unexpected HTTP code or an unexpected payload? You’d think in the age of try/catch and easily integrated logging frameworks you wouldn’t see this anymore, but I see it all the time. It’s a combination of not realizing that failure is ubiquitous, and not thinking about the impact (especially the customer-facing impact) of that failure.
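To make that concrete, here's a minimal sketch (the names and the service are mine, not from any particular app): retry with backoff, treat unexpected status codes as failures too, and actually decide what happens when the retries run out.

    import logging
    import time

    import requests

    log = logging.getLogger("quote-widget")

    def call_service(url, attempts=3, backoff_seconds=1.0):
        """Call a downstream service, assuming it will sometimes fail."""
        for attempt in range(1, attempts + 1):
            try:
                resp = requests.get(url, timeout=5)
            except requests.RequestException as exc:
                log.warning("attempt %d/%d: connection failed: %s", attempt, attempts, exc)
            else:
                if resp.status_code == 200:
                    return resp.json()  # can still blow up on a garbage payload -- catch upstream
                # An unexpected HTTP code is a failure too; log it, don't shrug it off.
                log.warning("attempt %d/%d: got HTTP %d", attempt, attempts, resp.status_code)
            if attempt < attempts:
                time.sleep(backoff_seconds * attempt)  # back off instead of hammering a sick service
        log.error("giving up on %s after %d attempts", url, attempts)
        return None  # the caller must handle this: degrade, queue for later, or alert

The hard part isn't the retry loop, it's that last return – deciding what the customer sees when everything has failed.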

This is one of the (many) great DevOps learning experiences – Ops helping teach Devs all the things that can go wrong that don’t really go wrong much in a “frictionless” lab environment.  “So, what do you do if your hard drive is suddenly not there?” (Common with Amazon EBS failures.)  “What do you do if you took data off that queue and then your instance restarts before you put it into the database?” (Hopefully a transaction.) “What do you do if you can’t make that network connection, are you retrying every 5 ms and then filling up the system’s TCP connections?” (True story.) “Hey, I’m sure your app is pure as the driven snow right now, but is it always going to work the same when the PaaS vendor changes the OS version under you?”
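That queue one deserves a sketch, because the answer doesn't even need anything exotic – just ordering. These clients are hypothetical stand-ins for whatever you actually use, but the pattern is standard at-least-once delivery:

    def drain(queue, db):
        """Process messages so an instance restart can't lose data."""
        while True:
            msg = queue.receive()   # message becomes invisible, but is NOT deleted yet
            if msg is None:
                break
            db.insert(msg.body)     # durable write happens first
            queue.delete(msg)       # acknowledge only after the data is safe
            # Die between insert and delete? The message reappears and gets
            # redelivered -- so the insert must be idempotent (dedupe on msg.id).

Do it in the other order – delete, then insert – and a restart at the wrong moment silently eats data, and no amount of retrying brings it back.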

In all circumstances, you should:

  • Plan for failure (understand failure modes, retry, design for it)
  • Detect failure (monitor, log, etc.)
  • Plan for and detect failure of your schemes to plan for and detect failure!
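That last one sounds cute, but it's real: your monitoring will fail too. The classic answer is a dead man's switch – something independent that alerts when the monitor itself goes quiet. A minimal sketch (the path and the threshold here are made up):

    import os
    import sys
    import time

    HEARTBEAT_FILE = "/var/run/monitor.heartbeat"  # your monitor touches this each run
    MAX_AGE_SECONDS = 600

    def watchdog():
        """Alert when the monitoring itself stops reporting, not just the app."""
        try:
            age = time.time() - os.path.getmtime(HEARTBEAT_FILE)
        except OSError:
            age = float("inf")  # heartbeat file missing: the monitor never ran at all
        if age > MAX_AGE_SECONDS:
            # Send this over a channel that doesn't share fate with the thing being watched.
            print("monitoring has been silent for %.0f seconds" % age, file=sys.stderr)
            return 1
        return 0

    if __name__ == "__main__":
        sys.exit(watchdog())

Run that from somewhere your main monitoring doesn't live, or you've just built yourself another single point of failure.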

We do some security threat modeling here, and I wonder if there isn’t a lightweight methodology like that which could be readily adapted to reliability modeling of apps. Seems like something someone would have done already – but a simple one, not some lame complicated risk matrix. I’ll have to research that.


Filed under DevOps

How We Do Cloud and DevOps: The Motion Picture

Our good friend Damon Edwards from dev2ops came by our Austin office and recorded a video of Peco and me explaining how we do what we do! Peco never blogs, so this is a rare opportunity to hear him talk about these topics, and he’s full of great sound bites. 🙂

I apologize in advance for how much I say “right.”


Filed under Cloud, DevOps

What Is Cloud Computing?

My recent post on how sick I am of people being confused by the basic concept of cloud computing quickly brought out the comments on “what cloud is” and “what cloud is not.” And the truth is, it’s a little messy; there’s no clear definition, especially across “the three aaSes.” So now let’s have a post for the advanced students. Chip in with your thoughts!

Here’s my Grand Unified Theory of Cloud Computing. Rather than being a legalistic definition that will always be wrong for some instances of cloud, it attempts to convey the history and related concepts that inform the cloud.

The Grand Unified Theory of Cloud Computing

( ISP -> colo -> MSP ) + virtualization + HPC + (AJAX + SOAP -> REST APIs) = IaaS
( web site -> web app -> ASP ) + virtualization + fast ubiquitous Internet + [ RIA browsers & mobile ] = SaaS
( IDEs & 4GLs ) + ( EAI -> SOA ) + SaaS + IaaS = PaaS
[ IaaS | PaaS | SaaS ] + [ devops | open source | noSQL ] = cloud

* Note: I don’t agree with all those Wikipedia definitions; they are only linked to clue in people unsure about a given term.

Sure, that’s where the cloud comes from, but “what is the cloud?” Well, here are my thoughts, the Seven Pillars of Cloud Computing. Having more of these makes something “more cloudy” and having fewer makes something “less cloudy.” Arguments over whether some specific offering “is cloud” or not, however, are for people without sufficiently challenging jobs.

The Seven Pillars of Cloud Computing

“The Cloud” may be characterized as:

  • An outsourced managed service
  • providing hosted computing or functionality
  • delivered over the Internet
  • offering extreme scalability
  • by using dynamically provisioned, multitenant, virtualized systems, storage, and applications
  • controlled via REST APIs
  • and billed in a utility manner.

You can remove one or more of these pillars to form most of the things people sell you as “private cloud,” for example, losing specific cloud benefits in exchange for other concerns.

Now there’s also the new vs. old argument. There are the technohipsters who say “Cloud is nothing new; I was doing that back in the ’90s.” And some of that is true, but only in the most uninteresting way. The old and the new have, via alchemy, begun to help users realize benefits beyond what they did before.

Benefits of Cloud – What and How

Not New:

  • Virtualization
  • Outsourcing
  • Integration
  • Intertubes

Pretty New:

  • Multitenancy
  • Massive scalability
  • Elastic self-provisioning
  • Pay as you go

Resulting Benefits:

  • Agility
  • Economy of scale
  • Low initial investment
  • Scalable cost
  • Resilience
  • Improved service delivery
  • Universal access
Okay, Clouderati – what do you think?


Filed under Cloud

Mystifying Cloud Computing

I have now received my 200th email entitled “Demystifying Cloud Computing.” This one is from InfoWorld, but I have gotten them from just about every media outlet there is. This has got to stop.

People, it is not a mystery any more!  If you are still “mystified” by cloud computing, you probably need to consider an alternate line of work that does not generate new ideas at the aggressive rate of one every decade.

Let’s get this over with.

Q: What is cloud computing?

A: It is other people taking care of shit for you on the Web.
Maybe it’s running a data center, maybe it’s storing your files, maybe it’s running your ERP system or email system.  It’s, like, stuff you would do, but you are paying someone else to do it better instead, on demand.
Maybe there’s “scaling,” or “utility billing,” or “REST APIs” involved, maybe not. Ask a grownup.

There, consider yourself demystified.  You may now go get some of the green paper out of your parents’ wallet and mail it to me.


Filed under Cloud

Report from NIWeek

Hey all, sorry it’s been quiet around here – Peco and I took our families on vacation to Bulgaria! Plus, we’ve been busy in the run-up to our company convention, NIWeek. I imagine most of the Web type folks out there don’t know about NIWeek, but it’s where the scientists and engineers who use our products come to learn. It’s always awesome to see the technology innovation going on out there, from the Stormchasers getting data on tornadoes and lightning that no one has ever gotten before, to high school kids solving real problems.

There were a couple of things that are really worth checking out. The first is the demo David Fuller did of NI’s system designer prototype (you can skip ahead to 5:00 in if you want to). Though the examples he uses are of engineering-type systems, you can easily imagine using that same interface for designing Web systems – no “separate Visio diagram” BS anymore. Imagine every level from architectural diagram to physical system representation to the real running code all being part of one integrated drill-down. It looks SUPER SWEET. Seems like science fiction to those of us IT types.

A quick guide to the demo – so first a Xilinx guy talks about their new ARM-based chip, and then David shows drill-up and down to the real hardware parts of a system.  NI now has the “traditional systems” problem in that people buy hardware, buy software, and are turning it into large distributed scalable architectures.  Not being hobbled by preconceptions of how that should be done, our system diagram team has come up with a sweet visualization where you can swap between architecture view (8:30 in), actual pictures and specs of hardware, then down (10:40 in) into the “implementation” box-and-line system and network diagram, and then down into the code (12:00 in for VHDL and 13:20 in for LabVIEW). LabVIEW code is natively graphical, so in the final drilldown he also shows programming using drawing/gestures.

Why have twenty years of “systems management” and design tools from IBM/HP/etc. not given us anything near this awesome for other systems? I don’t know, but it’s high time. We led a session at DevOpsDays about diagramming systems, and “I make a Visio on the side” is state of the art. There was one guy who took the time to make awesome UML models, but real integration of design/diagram to real system doesn’t exist. And it needs to. And not in some labor-intensive “How about UML oh Lord I pooped myself” kind of way, but as an easy and integral part of building the system.

I am really enjoying working in the joint engineering/IT world. There are some things IT technology has figured out that engineering technology is just starting to bumble into (security, for example, and Web services). But there are a lot of things engineering does that make IT efforts look like the work of a bumbling child. Take instrumentation and monitoring: the IT state of the art is vomitous when placed next to real engineering data gathering (and analysis, and visualization) techniques. Will system design also be revolutionized from that quarter?

The other cool takeaway was how cloud is gaining a foothold in the engineering space. I was impressed as hell with Maintainable Software, the only proper Web 3.0 company in attendance. Awesome SaaS product, and I talked with the guys for a long time and they are doing all the cool DevOps stuff – automated provisioning, continuous deployment, feature knobs, all that Etsy/Facebook kind of whizbang shit. They’re like what I want our team here to become, and it was great meeting someone in our space who is doing all that – I love goofy social media apps or whatever, but it can sometimes be hard to convey the appropriateness of some of those practices to our sector. “If it’s good enough to sell hand-knitted tea cozies or try to hook up with old high school sweethearts, then certainly it’s good enough to use in your attempt to cure cancer!” Anyway, Mike and Derek were great guys and it was nice to see that new kind of thinking making inroads into our sometimes too-traditional space.


Filed under Conferences, General

Moving Your Amazon EC2 AMIs To Another Region

Here’s what I found out the hard way during the Amazon outage when I wanted to migrate my systems to a different region.

You can’t use your AMIs or other assets in a region different from the one they were created in. You can’t use your security groups or keypairs or EBSes either; you have to migrate or recreate all of them, yay. Some methods to do this follow.

Manual method:

Automated:

Paying Money:

Of course, what the answer should be is “click a button in the Amazon console or invoke the Amazon API to say ‘move ami-xxxxx to region y’ and be done.”
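In code terms, here's the shape of what I'm asking for – a sketch in boto3 terms (boto3 postdates this post; copy_image is the call Amazon did eventually ship, run against the destination region):

    import boto3

    # Run against the *destination* region and point it at the source AMI.
    ec2_west = boto3.client("ec2", region_name="us-west-1")

    resp = ec2_west.copy_image(
        Name="my-app-server",       # name for the new AMI in the target region
        SourceImageId="ami-xxxxx",  # the AMI you want moved
        SourceRegion="us-east-1",   # where it lives today
    )
    print("new AMI in the destination region:", resp["ImageId"])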

In the end, none of these were working during the outage because they all rely on the ability to bring up an instance/EBS in the source region. We then rebuilt images from scratch in us-west, but it looks like east is back online now just as we’re finishing with that. So plan ahead!
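Part of planning ahead is scripting the recreation of the little stuff before you need it. As a taste, here's a minimal sketch of just the security group piece, again in today's boto3 terms (assumes simple groups; cross-group references and VPC details need more care than this):

    import boto3

    def copy_security_group(group_name, src_region, dst_region):
        """Recreate a security group's inbound rules in another region."""
        src = boto3.client("ec2", region_name=src_region)
        dst = boto3.client("ec2", region_name=dst_region)

        group = src.describe_security_groups(GroupNames=[group_name])["SecurityGroups"][0]
        new = dst.create_security_group(
            GroupName=group["GroupName"], Description=group["Description"]
        )
        if group["IpPermissions"]:
            dst.authorize_security_group_ingress(
                GroupId=new["GroupId"], IpPermissions=group["IpPermissions"]
            )

    copy_security_group("web-tier", "us-east-1", "us-west-1")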


Filed under Cloud

Our Cloud Products And How We Did It

Hey, I’m not a sales guy, and none of us spend a lot of time on this blog pimping our company’s products, but we’re pretty proud of our work on them and I figured I’d toss them out there as use cases of what an enterprise can do in terms of cloud products if they get their act together!

Some background.  Currently all the agile admins (myself, Peco, and James) work together in R&D at National Instruments.  It’s funny, we used to work together on the Web Systems team that ran the ni.com Web site, but then people went their own ways to different teams or even different companies. Then we decided to put the dream team back together to run our new SaaS products.

About NI

National Instruments (hereafter, NI) is a 5000+ person global company that makes hardware and software for test & measurement, industrial control, and graphical system design. Real Poindextery engineering stuff. Wireless sensors and data acquisition, embedded and real-time, simulation and modeling. Our stuff is used to program the Lego Mindstorms NXT robots as well as control CERN’s Large Hadron Collider. When a crazed highlander whacks a test dummy on Deadliest Warrior and Max the techie looks at readouts of the forces generated, we are there.

About LabVIEW

Our main software product is LabVIEW. Despite being an electrical engineer by degree, I never used LabVIEW in school (this was a very long time ago, I’ll note; most programs use it nowadays), so it wasn’t till I joined NI that I saw it in action. It’s a graphical dataflow programming language. I assumed that was BS when I heard it. I had so many companies try to sell me “graphical” programming over the years, like all those crappy 4GLs back in the ’90s, that I figured it was just an unachieved myth. But no, it’s a real visual programming language that’s worked like a champ for more than 20 years. In certain ways it’s very badass: it does parallelism for you and can be compiled and dropped onto an FPGA. It’s remained nichey and hasn’t been widely adopted outside the engineering world, however, due to company focus more than anything else.

Anyway, we decided it was high time we started leveraging cloud technologies in our products, so we created a DevOps team here in NI’s LabVIEW R&D department with a bunch of people that know what they’re doing, and started cranking on some SaaS products for our customers! We’ve delivered two and have announced a third that’s in progress.

Cloud Product #1: LabVIEW Web UI Builder

First out of the gate – LabVIEW Web UI Builder. It went 1.0 late last year. Go try it for free! It’s a Silverlight-based RIA “light” version of LabVIEW – you can visually program, and interface with hardware and/or Web services. As internal demos we even had people write things like “Duck Hunt” and “Frogger” in it – it’s like Flash programming but way less of a pain in the ass. You can run in browser or out of browser and save your apps to the cloud or to your local box. It’s a “freemium” model – totally free to code and run your apps, but you have to pay for a license to compile your apps for deployment somewhere else – and that somewhere else can be a Web server like Apache or IIS, or it can be an embedded hardware target like a sensor node. The RIA approach means the UI can be placed on a very low-footprint target, because it runs in the browser; the target just has to serve data and expose its control API.

It’s pretty snazzy. If you are curious about “graphical programming” and think it is probably BS, give it a spin for a couple minutes and see what you can do without all that “typing.”

A different R&D team wrote the Silverlight code; we wrote the back-end Web services and did the cloud infrastructure, ops support structure, authentication, security, etc. It runs on Amazon Web Services.

Cloud Product #2: LabVIEW FPGA Compile Cloud

This one’s still in beta, but it’s basically ready to roll. For non-engineers, an FPGA (field-programmable gate array) is essentially a rewritable chip. You get the speed benefits of being on hardware – not as fast as an ASIC, but way faster than running code on a general-purpose computer – as well as being able to change the software later.

We have a version of LabVIEW, LabVIEW FPGA, used to target LabVIEW programs to an FPGA chip. Compilation of these programs can take a long time – usually a number of hours for complex designs. Furthermore, the software required for the compilation is large and getting more diverse, as there are more and more chips out there (each pretty much has its own dedicated compiler).

So, cloud to the rescue. The FPGA Compile Cloud is a simple concept – when you hit ‘compile,’ it just outsources the compile to a bunch of servers in the cloud instead of locking up your workstation for hours (assuming you’ve bought a subscription). FPGA compilations carry everything they need with them; there are no unique compile environments to set up or anything, so it’s very commoditizable.

The back end for this isn’t as simple as the one for UI Builder, which is just cloud storage and load balanced compile servers – we had to implement custom scaling for the large and expensive compile workers, and it required more extensive monitoring, performance, and security work. It’s running on Amazon too. We got to reuse a large amount of the infrastructure we put in place for systems management and authentication for UI Builder.
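To give a flavor of what "custom scaling" means here – and this is an illustrative sketch with made-up numbers, not our actual code – big expensive workers get scaled on compile-queue backlog rather than CPU, and clamped hard at both ends so a bug can't spin up a hundred of them:

    import math

    def desired_worker_count(queued_jobs, running_jobs, jobs_per_worker=1,
                             min_workers=1, max_workers=20):
        """Pick a compile-worker fleet size from queue backlog (illustrative only)."""
        backlog = queued_jobs + running_jobs
        wanted = int(math.ceil(backlog / float(jobs_per_worker)))
        return max(min_workers, min(max_workers, wanted))

    # e.g. 7 queued + 5 running compiles, one compile per worker -> 12 workers
    print(desired_worker_count(7, 5))

A real version would also want hysteresis – FPGA compiles run for hours, so you kill workers reluctantly – but that's the gist.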

Cloud Product #3: Technical Data Cloud

It’s still in development, but we’ve announced it so I get to talk about it! The idea behind the Technical Data Cloud is that more and more people need to collect sensor data, but they don’t want to fool with the management of it. They want to plop some sensors down and have the acquired data “go to the cloud!” for storage, visualization, and later analysis. There are other folks doing this already, like the very cool Pachube (pronounced “patch-bay”, there’s a LabVIEW library for talking to it), and it seems everyone wants to take their sensors to the cloud, so we’re looking at making one that’s industrial strength.

For this one we are pulling out our big guns, our data specialist team in Aachen, Germany. We are also being careful to develop it in an open way – the primary interface will be RESTful HTTP Web services, though LabVIEW APIs and hardware links will of course be a priority.

This one had a big technical twist for us – we’re implementing it on Microsoft Windows Azure, the MS guys’ cloud offering. Our org is doing a lot of .NET development and finding a lot of strategic alignment with Microsoft, so we thought we’d kick the tires on their cloud. I’m an old Linux/open source bigot, and to be honest I didn’t expect it to make the grade, but once we got up to speed on it I found it was a pretty good bit of implementation. It did mean we had to significantly expand the underlying platform we are reusing for all these products – just supporting Linux and Windows instances in Amazon had already made us toss a lot of insufficiently open solutions in the garbage bin, and these two cloud worlds are very different as well.

How We Did It

I find nothing more instructive than hearing the details – organizational, technical, etc. – of how people really implement solutions in their own shops. So in the interest of openness and helping out others, I’m going to do a series on how we did it! I figure it’ll be in about three parts, most likely:

  • How We Did It: People
  • How We Did It: Process
  • How We Did It: Tools and Technologies

If there’s something you want to hear about when I cover these areas, just ask in the comments!  I can’t share everything, especially for unreleased products, but promise to be as open as I can without someone from Legal coming down here and Tasering me.


Filed under Cloud, DevOps

Why Amazon Reserve Instances Torment Me

We’ve been using over 100 Amazon EC2 instances for a year now, but I’ve just made my first reserve instance purchase. For the untutored, reserve instances are where you pay a yearly amount up front per instance, and in exchange you get a much, much lower hourly cost. On its face, it’s a good deal – take a normal Large instance you’d use for a database. For a Linux one, it’s $0.34 per hour. Or you can pay $910 up front for the year, and then it’s only $0.12 per hour. So theoretically, it takes your yearly cost from $2978.40 down to $1961.20. A great deal, right?

Well, not so much. The devil is in the details.

First of all, you have to make sure you actually run all those instances all the time. If you buy a reserve instance and then don’t use it some of the time, you immediately start cutting into your savings. The crossover is at 172 days – if you don’t run the instance at least 172 days out of the year, you are upside down on the deal.
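The crossover arithmetic, for the skeptical, using the m1.large numbers from above:

    # Reserve instance break-even.
    on_demand = 0.34    # $/hour on demand
    reserved  = 0.12    # $/hour after the upfront payment
    upfront   = 910.00  # one-year upfront

    # You save (0.34 - 0.12) = $0.22 for every hour the instance actually runs.
    breakeven_days = upfront / ((on_demand - reserved) * 24)
    print("break even after %.0f days of uptime" % breakeven_days)  # ~172

Every idle day pushes that payback date out further.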

But what’s the big deal, you ask?  Sure, in the cloud you are probably (and should be!) scaling up and down all the time, but as long as you reserve up to your low water mark it should work out, right?

The second, bigger problem is that when you reserve instances, you have to specify everything about those instances. You aren’t reserving “10 instances,” or even “10 large instances” – you have to specify:

  • Platform (UNIX/Linux, UNIX/Linux VPC, SUSE Linux, Windows, Windows VPC, or Windows with SQL Server)
  • Instance Type (m1.small, etc.)
  • AZ (e.g. us-east-1b)

And tenancy and term. So you have to reserve “a small multitenant Linux instance in us-east-1b for one year.” But having to specify down to this level is really problematic in any kind of dynamic environment.

Let’s say you buy 10 m1.large instances for your databases, and then you realize later you really need to move up to an m1.xlarge.  Well, tough. You can, but if you don’t have 10 other things to run on those larges, you lose money. Or if you decide to change OS.  One of our biggest expenditures is our compile farm workers, and on those we hope to move from Windows to Linux once we get the software issues worked out, and we’re experimenting with best cost/performance on different instance sizes. I’m effectively blocked from buying reserve for those, since if I do it’ll put a stop to our ability to innovate.

And more subtly, let’s say you’re doing dynamic scaling and splitting across AZs, like they always say you should do for availability purposes. Well, if I’m running 20 instances scaled across 1b and 1c, I am not guaranteed to always be running 10 in 1b and 10 in 1c; it’s more random than that. Instead of buying 20 reserve, you have to buy, say, 7 in 1b and 7 in 1c to make sure you don’t end up losing money.

Heck, they even differentiate between plain Linux, SUSE Linux, and Linux VPC instances, which clearly crosses over into annoyingly picky territory.

As a result of all this, it is pretty undesirable to buy reserve instances unless you have a very stable environment, both technically and scale-wise. That sentence doesn’t describe the typical cloud use case in my opinion.

I understand, obviously, why they are doing this. From a capacity planning standpoint, it’s best for them if they make you specify everything. But what I don’t think they understand is that this cuts into people’s willingness to buy reserve, and reserve is not only upfront money but also a lock-in period, which should be grotesquely attractive to a business. I put off buying reserve for a year because of this, and even now that I’ve done it, I’m not buying near as many reserve instances as I could be, because I have to hedge my bets against ANY changes to my service. It seems to me that this also degrades the alleged point of reserves, which is capacity planning – if you’re so picky about it that no one buys reserve and 95% of your instances are on demand, then you can’t plan real well, can you?

What Amazon needs to do is meet customers halfway. It’s all a probabilities game anyway. They lose specificity on each given reserve request, but get many more reserve requests (and all the benefits they convey – money, lock-in, capacity planning info) in return.

Let’s look at each axis of inflexibility and analyze it.

  • Size. Sure, they have to allocate machines, right? But I assume they understand they are using this thing called “virtualization.” If I want to trade in 20 reserved small instances for 5 large instances (each large is 4x a small), why not? It loses them nothing to allow this; they just have to make the effort to support it in their console/APIs. I can understand needing to reserve a certain number of “units,” but those should be flexible as to exact instance type at any given time.
  • OS. Why on God’s green earth do I need to specify OS? Again, it’s virtualized, right? Is it so they can buy enough Windows licenses from Microsoft? Cry me a river. This one needs to leave immediately and never come back.
  • AZ. This is annoying from the user POV but probably the most necessary from the Amazon POV, because they have to put enough hardware in each data center, right? I do think they should try to make this a per-region and not a per-AZ limit, so I’d just be reserving “us-east” in Virginia and not a specific AZ; that would accommodate all my use cases.

In the end, one of the reasons people move to the cloud in the first place is to get rid of the constraints of hardware. When Amazon just puts those constraints back in place, it becomes undesirable. Frankly, even now I tried to just pay Amazon up front rather than actually buy reserve, but they’re not really enterprise friendly yet from a finance point of view, so I couldn’t make that happen, and in the end I reluctantly bought reserve. The analytics around it are lacking too – I can’t look in the Amazon console and see “Yes, you’re using all 9 of your large/Linux/us-east-1b reserved instances.”

Amazon keeps innovating greatly in the technology space, but in the customer interaction space they need a lot of help – they’re not the only game in town, and players with less technically sophisticated but more aggressive customer service/support options will erode their market share. I can’t help but think that Rackspace’s play of backing OpenStack is the purest example of this – “Anyone can run the same cloud we do, but we are selling our ‘fanatical support’” is the message.


Filed under Cloud

DevOps at CloudCamp at SXSWi!

Isn’t that some 3l33t jargon? Anyway, Dave Nielsen is holding a CloudCamp here in Austin during SXSW Interactive on Monday, March 14 (followed by the Cloudy Awards on March 15). John Willis of Opscode and I reached out to him, and he has generously offered to let us have a DevOps meetup in conjunction.

John’s putting together the details but is also traveling, so this is going to be kind of emergent.  If you’re in Austin (normally or just for SXSW) and are interested in cloud, DevOps, etc., come on out to the CloudCamp (it’s free and no SXSW badge required) and also participate in the DevOps meetup!

In fact, if anyone’s interested in doing short presentations or show-and-tells or whatnot, please ping John (@botchagalupe) or me (@ernestmueller)!


Filed under Cloud, Conferences, DevOps

SXSW Interactive Is Here!

SXSW Interactive is going on in Austin from tomorrow through next Tuesday, and loads of great cloud and DevOps folks will be in town for it. Looking forward to talking with @cote, @lennysan, @botchagalupe, @davenielsen, @ehuddleston, and many more.

Here’s what I think the best cloud/DevOps/high-tech related tickets are; let me know what I’m missing! A lot of the off-premise events don’t even require badges, and most are free.

Random Off Premise SXSW Interactive Stuff

Other events – not job related but make me happy:

Sessions

Sessions are less important than the other stuff so they’re on here second!  No time for links, search on ’em.

What’s the good stuff I haven’t mentioned?  DevOps, Cloud, noSQL, and other cool stuff report below!


Filed under Conferences