Record and Replay Browser Testing, Take 2

Recently I reported, and I quote, “Bah!” after trying a bunch of record-and-playback cross-browser testing options. To recap: we’re a startup, our devs are writing UI tests, but we don’t have cross-browser testing, so I tried to find something where I could record and replay our pretty simple Angular UI flow and get some cross-browser coverage on it without needing code. And I didn’t find anything that worked. But I got a bit more traction after that first run at it, so here’s part 2.

EndTest

The EndTest support folks looked into it and fixed my test so it worked. Some of the fixes were alternate locator schemes. Some were advanced options I’m not sure how I would have figured out on my own (to close those pesky multiselect dropdowns, you can’t just click on the overlay backdrop that comes up, you have to offset the click by some pixels…). I am not a front end guy, I just fake it, so this is a little daunting, but their support seems able to help with tests, so it’s doable.

I now have a working test, partly generated from the recording and partly written by hand. The trial doesn’t allow other browsers by default, but they turned them on for me. With some light fiddling I got them all running; the only issue was IE, which didn’t like that pixel-offset workaround from above, and EndTest fixed it by changing my test to hit “enter” instead – fair enough.

I then also made a test for the second part of our flow, which produces a PDF that needs validation; you can do that with a screenshot comparison.

So after help from their support, I have a working test suite. It’s a stretch to say it’s pure record and replay – it’s record, edit the locators a bunch, and replay – but since my last iteration on this ended at “none of these solutions work and do cross-browser,” that’s a big win.

Sauce Labs

Last time, I had been trying to use the Selenium IDE for record/replay and integrate with Sauce for the cross-browser testing. They got back to me, extended my trial minutes, and gave me some tips on how to get some of the tests working. They don’t support the Selenium IDE, however, so they don’t really help with the tests per se.

I had a weird experience: intermittent (but frequent) timeouts when running tests against Sauce with selenium-side-runner. At some point 0-4 minutes into the test, it would just hang, time out, and give me “ETIMEDOUT connect ETIMEDOUT 66.85.49.22:443”.

And then I got into the lovely rich set of test things that don’t work the same across browsers. Oh, in Edge “the element isn’t clickable because it’s obscured.” In Firefox on Mac, “that radio button isn’t clickable because the inner part of the radio button obscures it.” All different elements. The problem is, fiddling with the locators to get something one browser likes breaks the test in the other browsers. But the app works in all those browsers – just not the tests.

Anyway, Sauce support said “we can’t support Selenium IDE questions,” so I guess that’s it. (I don’t know that timeouts when running tests on their own service count as Selenium IDE questions, but whatever.) I had hope when I found Selenium IDE that record and replay with Sauce was feasible, but it seems like it’s a starting point to seed your own Selenium code at best.

mabl

A sales rep from mabl wouldn’t let go of me till I tried their solution. So I did, and it has a lot of promise. It tries a bunch of different methods to find a locator automatically and self-heals the tests, which is great. On the one hand, the tests are slow; a 4-minute Selenium test is a 6-8 minute mabl test. On the other hand, it works!

It worked the first time, in fact (well… second, but that was because I went into a click frenzy trying to get the mabl trainer window and our HubSpot popup out of my way). I now have a working mabl test, though it hits one-minute timeouts and confusion on that same “click on the backdrop to get out of the multiselect dropdown” issue that’s a pain in all the other solutions as well; it gets through, but only after a long timeout, and it doesn’t self-heal it for the next test run, so it works but is slow. I hate multiselect dropdowns. Anyway, after a discussion with mabl support, it turns out it tries a bunch of ways to find a locator and then self-heals if it found a better one, but then tries a bunch of ways to click/exercise that locator and doesn’t self-heal from that – hence the long timeout in my test. I gave them the feedback: “self-heal that too, yo!”

OK, so it’s working – how about cross-browser? I add Firefox: it works. IE and Edge are only on the “enterprise plan,” but I ask them to add them to the trial, they do, I run them, and… they work! Safari… well, a problem there, but mabl looked at it and thinks it may be on their end; they’re still working on a fix as of “press time.” I’m super impressed – of all the stuff I’ve tried, only GhostInspector actually recorded and replayed without significant recoding, and that was only Chrome/Firefox. They also do PDF testing, which we need.

So it works great and looks good! And they have a lot of cool dev integration and stuff to get into: it uses a branching model for the tests, you can run the tests locally… a great-looking ecosystem. You can’t choose the OS platform, though; Chrome and Firefox just run on Linux. At our current level of detail, that’s fine.

Next step is discussing pricing (the Web site just says “contact us”)… and unfortunately it’s way, way out of reach of a 10-person startup. The cross-browser and PDF testing are part of the “Enterprise plan,” but even if I sweet-talked them into putting those features in their lowest-cost plan, it would still be cost-prohibitive for us while we’re in seed round. I mean, it’s totally worth it, because it works out of the box without fiddling around – if I were still at AT&T Cybersecurity I’d make someone spring for it for sure – but it’s an extremely significant pricing step above these other solutions and not at the value point for where we are right now. Dang.

New Bottom Line

OK so my new bottom line is this.

  1. If you just want quickie Chrome/Firefox-on-Linux record and replay, GhostInspector works out of the box. Nice, but I want more browsers, so it doesn’t quite fit my needs.
  2. If you want record and replay on various OS/browser combinations and are willing to do some re-coding and testing to make it work, EndTest does it. Fits my need with a little pain, and is affordable.
  3. If you want cross-browser record and replay without hand coding, without full platform choices, but with a bunch of cool dev-friendly extras, mabl does a great job – for a lot more money than the other options. Best for my needs, but most expensive.
  4. The other options basically don’t work on Angular (CrossBrowserTesting), or possibly work but with an intensive time investment that makes it not really record and replay IMO (Sauce + Selenium IDE).
  5. Though if you don’t need a bunch of CI-driven executions per day and don’t care about all the platforms, you can probably just use the Selenium IDE to do 3 major browsers locally installed on whatever laptop you have, for free (Safari/Chrome/Firefox on Mac, or Chrome/Firefox/Whatever Microsoft Is Pushing Today on Windows). Free but DIY.

So for us, being on a seed-round budget, EndTest is probably the best compromise of functionality and price at this point – YMMV, of course.


Trying To Record And Replay Browser Tests… FML

I’m working for a startup right now, and we don’t have a huge excess of development staff. Our devs have been implementing UI testing in Cypress, but we also need some wide cross-browser testing of our front-end Angular apps – we’d already found a couple of blocker bugs on Edge and IE, largely by accident. The devs are all busy devving, so I figured I’d take that on. I said, “Well, there are products where you can just click to record a UI session and replay it in other browsers without writing a bunch of code – let’s try that out.” Most everyone has a free trial nowadays, so I could see which ones were best. Then the pain began.

Sauce Labs – Part 1

I had used Sauce in a previous life, when we had a bunch of Robot Framework/Selenium tests, and I liked it. So I went there first. Unfortunately, they have no record/replay capability (verified by their support), so I moved on.

But I came back later, because I had found that there’s a Selenium IDE – a record-and-playback tester that you can integrate with Sauce by using selenium-side-runner.

Selenium IDE is very cool. Its killer feature is that as it records, it captures various ways to address an item on the screen – css, xpath, full xpath – and when it replays, if the first one doesn’t work it tries the next one, and if that works it tells you “hey, you should update this test.” That’s great, because UI testing is shitty and unreliable at best, and once you have Angular generating ever-changing ids for elements it is even worse. The only bad thing is you have to go add assertions in manually afterwards.
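
To make that concrete, here’s roughly what one recorded step looks like inside the saved .side file (a sketch from memory of the JSON format, so treat the exact field names and locators as illustrative – the point is the targets list of fallback locators):

{
  "command": "click",
  "target": "id=username",
  "targets": [
    ["id=username", "id"],
    ["css=#username", "css:finder"],
    ["xpath=//input[@id='username']", "xpath:attributes"]
  ],
  "value": ""
}

When the first locator fails on replay, it walks down that list, and if a fallback is what actually worked, it suggests you update the test.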

So in fairly short order, I managed to get a reproducible Selenium IDE script that exercised our Angular app and worked. The app is just, like, 7 screens of form fill; it’s not crazy.

Well, then I tried to save it as a “.side” project and feed it through Sauce using selenium-side-runner, which is just:

npm install -g selenium-side-runner

selenium-side-runner --server <sauce-url> -c "browserName='chrome' version='latest' platform='macOS 10.14'" "Paul Precision.side"

You get that Sauce URL, which has your credentials embedded, under User Settings/Driver Creation in their UI.

Unfortunately, once I pushed it to Sauce (starting on the same OS/browser, which you get the tokens for from their Platform Configurator) – problems. The player is great; it shows the video (even live while testing) synced to the step taking place (unfortunately, since I’m piping the test in, it shows the steps not in the test’s own syntax but in raw Selenium execution syntax).

I fixed most of them by changing selectors away from CSS to xpath, then sitting there iterating with Chrome dev tools and the IDE, trying new ways to address an item until it worked in Chrome, then worked in Selenium IDE, and then… didn’t work on Sauce. I’ve gotten it 90% working, but the last 10% is blocking me.

CrossBrowserTesting

Next I tried SmartBear’s CrossBrowserTesting.com – an all-in-browser recorder that worked great! And then the replays didn’t work. I messed with it a while and contacted their support, who said “Oh yeah, it doesn’t do Angular, it’s for static pages.” So, on to the next one. Who uses static pages? This is 2020.

The interface is nice enough: editable steps next to a running video (though not synced up).

Actually, looking at it closer, I bet I could do the same “edit all the locators” deal and try to get it to work, but… my 7-day trial is over (a week shorter than the other options’), so I guess I can’t. It didn’t do the nice multi-locator guessing Selenium IDE does, but it does seem to offer several locator options in a dropdown while you edit the tests, and the recorder is integrated into the offering, which is nice – the UI was good overall. Unfortunately, the super short trial and presales support saying “Angular? Go away!” prevented me from really seeing if it could work for us.

GhostInspector

Demoralized, I head to Twitter, and someone recommends GhostInspector. You record with a Chrome plugin and then replay in the browser – there’s video, and it shows the editable steps next to a screenshot with the % change from the last screenshot (the steps aren’t synced to the video, which would be better). You can add assertions while you’re recording. And the replay works the first time, and every time – Hallelujah!

And then I look to set up cross-browser and discover they only support Chrome and Firefox, and that to do even that in an automated manner you have to duplicate the entire test suite. I was so disappointed; it worked perfectly otherwise.

Seriously, y’all, if you add more browsers I’ll pay you immediately for this.

EndTest

Determined to make this happen, I find EndTest and, after verifying they support a full OS/browser matrix, try them. They also use a browser-plugin recorder, like GhostInspector.

I’ll be honest, the UX is terrible. Besides the 1990s colored icons, everything is always a click away – you have to watch the replay video separately from looking at the logs, separately from looking at the steps, separately from editing the steps. Everyone, the magic combination is: editable steps on the left, running video and logs on the right, highlighting the step you’re on as it plays. Anything else harms your usability. Also, while editing, you can’t insert a step just anywhere; you have to add it at the end of your 100 steps and then drag it up, page by page… and often when you do that, you just get “error saving test” messages for no reason. Argh.

But… the recording is quick, and then it’s semi-working. Tempting. Now I start the iterative edit-replay-debug cycle. It is slow. You get to give your steps names, but those names don’t show up in the test output, because why would they? After an afternoon of fiddling, I’m halfway through a 7-screen flow. Their support was nicely proactive and reached out to me about an error (I was looking for text with a $ in it, and you can’t do that, but you can define a variable and then use that…).

It’s at this point I also find the Selenium IDE and bring Sauce back into the mix.

Keep Trying – Sauce and EndTest

Next, what I was doing was fiddling with the steps in the Selenium IDE, then pumping those changes both into Sauce via the CLI and manually into EndTest’s UI, desperately hoping to get one to pass (they don’t act the same under the same inputs, for whatever reason).

Locator by locator, I grind through making the test work. I have a lot of trouble where we use multiple-option mat-selects, because they “stay open” while you select items and I can’t get them to close. I try sending ESCAPE keys but can’t get that to work; I try double-clicking on other things… One of our devs figured out that the magical thing to click on was the overlay backdrop (css=.cdk-overlay-backdrop) to close the damn multiselect box.
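
In Selenium IDE’s command-and-target terms, the step that finally closes an open mat-select is just:

click | css=.cdk-overlay-backdrop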

This takes several grueling days. I ask the support folks for help but don’t really get any useful traction. Finally, I get a magic combination in the Selenium IDE that also works in Sauce! I try the same locators in EndTest and they don’t work.

It’s super frustrating. The same locator doesn’t work in all 3 tools, often forcing me to choose a less portable option – instead of something resilient to change like xpath=//span[contains(.,'Visual Line of Sight')], which works in some cases, I end up having to use something like xpath=//mat-option[@id='mat-option-87']/mat-pseudo-checkbox (and sadly, in Angular Material those ids randomize unpredictably). Like, there will literally be two identical-except-for-the-text-and-ids-in-them widgets one after another, and one kind of locator works on the first one and not on the second. No idea why.

Sauce Labs – Part 2

OK, so of all the options, the only one that actually works for me and will allegedly do cross-browser testing is an unsupported combo of Selenium IDE and Sauce run off the command line. I found a couple of helpful sources over the course of this.

Not optimal, but at this point I’m a week in and taking what I can get. Let’s try an actual cross-browser matrix now. Bonus hacky Bash script:

#!/usr/bin/env bash
# Run each recorded .side test against every OS/browser combo on Sauce.

tests=("Paul Precision.side")

platforms=("browserName='chrome' version='latest' platform='macOS 10.14'"
           "browserName='chrome' version='latest' platform='Windows 10'"
           "browserName='chrome' version='latest' platform='Linux'"
           "browserName='MicrosoftEdge' version='latest' platform='Windows 10'"
           "browserName='safari' version='latest' platform='macOS 10.14'"
           "browserName='firefox' version='latest' platform='macOS 10.14'"
           "browserName='internet explorer' version='latest' platform='Windows 10'")

for test in "${tests[@]}"; do
    for platform in "${platforms[@]}"; do
        echo "Running ${test} on ${platform}"
        echo
        selenium-side-runner --server https://<secrets>@ondemand.saucelabs.com:443/wd/hub -c "${platform}" "${test}"
    done
done

Chrome on macOS – works. Chrome on Windows – works. Chrome on Linux – for some reason it can’t find a selector early on. Edge on Windows – a weird proxy 400 error; it won’t even load the page. Pretty sure that’s not my fault. Safari on macOS – can’t click on the first thing it needs to click on. Firefox on macOS – same error? Really? Now IE… out of minutes (despite the UI telling me 0.6 automated hours remain).

I have tried all these OS/browser combos manually, and they work.

So my conclusion is that all of these suck, and I guess I just need to pay manual QA people to click on our app. Great. Or wait for Cypress to get off their butts and add cross-browser support, which they have said “is coming” for three years now.

We’re a startup and time is money, so in the end, cross-browser testing is not worth the hassle with any of these solutions. But it is important, and I’d love for someone to make a solution that actually works.

P.S. Please do not suggest another solution unless it a) has UI record-and-replay capability and b) is cross-browser (Chrome/Firefox/Safari/IE/Edge on Windows/macOS/Linux). I know there are a million browser automation testing tools out there; that’s not what I need.

Update

I put some more time into this and got some working options – see Record and Replay Browser Testing, Take 2!


Agile Organization Incorporating Various Disciplines

I started thinking about this recently because there was an Agile Austin QA SIG meeting, which I sadly couldn’t attend, entitled “How does a QA manager fit into an Agile organization?” – it asked how to fit members of other disciplines (in this case QA) in with agile teams. Over the last couple of years I’ve tried this, and seen it tried, in several ways with DevOps, QA, Product Management, and other disciplines, and I thought I’d elaborate on the pros and cons of some of these approaches.

Two Fundamental Discoveries

There are two things I’ve learned from this process that are pretty universal in terms of their truth.

1. Conway’s Law is true. To summarize, it states that a product will tend to reflect the structure of the organization that produces it. The corollary is that if your organization has divisions that are of no practical value to the product’s consumer, you will create striations within your product that hurt client satisfaction. Hence the basic ITSM and Agile doctrines on creating teams around owning a service/product.

2. People want to form teams and stay with them. This should be obvious from basic psychology/sociology, but if you set up an organization that is too fluid, you strongly degrade the morale of your workers. In my previous role we conducted frequent engineer-satisfaction surveys, and the most prominent truth drawn from them is that the more frequently people are asked to change roles, reorg, or move to different teams, the less happy they are. Even people who want to move to new challenges frequently are happier if they are moving to those new challenges with a team they’ve had an opportunity to go through Tuckman’s stages of group development with.

I have seen enough real-world quantified proof of both of these assertions that I will treat them as assumptions going forward.

Organizational Options

We tried out all four of these models within the same organization of high-performing engineers, and thus had a great opportunity to compare their results.

Separate Teams

When we started, we had the traditional model of separate teams that would hand off work to each other. “Dev,” “Ops,” “QA,” and “Product” were all under separate management up through several levels and operated as independent teams; individual affinities with specific products were emergent and simply matters of convenience (e.g. “Oh, he knows a lot about that BI stuff, let him handle that request”), not a matter of being dedicated to specific products.

Embedded Crossfunctional Service Teams

Our first step away from the pure separate-team model was to take those separate teams and embed specific members from them into service-oriented teams, while still having them report to a manager or director representing their discipline. In some cases, the disciplinary teams would reserve some staff for tool development or other cross-cutting concerns. So QA, for example, had several engineers assigned to each product team, even though they were still regarded primarily as part of the permanent QA org.

We (very loosely) considered our approach to be decentralized and microservice-based; Martin Fowler is writing a good article in installments on Microservice Architecture if you want more on that topic.

Fully Integrated Service Teams

With our operations staff, we went one step further and simply assigned them permanently to product teams, removing the separate layer of management entirely. Dev and “DevOps” engineers reported to the same engineering manager and were a permanent part of a given product team. Any common tooling needed was created by a separate “platform” engineering team that was similarly integrated.

Project Based Organization

Due to the need to surge effort at times, we also had some organizations that were project-based, not product-based. Engineers would be pulled from existing teams, or entire teams would additionally be pulled, into a short-term (1-3 month) effort to try to make significant headway across multiple products, which would then dissolve afterwards.

Hmm, this looks like it’s getting big (and I need to do some diagrams). I’ll break it up into separate articles, one for each type of org with its pros and cons, and then a conclusion.


Scrum for Operations: Order from Chaos

Welcome to the second installment in Scrum for Operations, a series where I talk about (and go through) the process of doing systems work as part of a DevOps team according to the Scrum methodology. Last time, I introduced the basics of Scrum as it is generally used, and its key benefit of frequently delivering useful functionality. But I already hear the objections – “How can that turn out all right?” It is so light on process that one’s initial inclination is to dismiss it as “cowboy coding,” and we already know not to be “cowboy sysadmins,” right? One’s intuition might be (and mine was in the beginning, I’ll be honest) that this would be a metastable process, unsustainable, with fundamental fatal flaws that would eventually overtake it.

Well, as I learned after trying to learn more and kicking the tires with our dev team here, there are several core disciplines that are agile’s saving graces.

Testing

We ops guys are used to testing being a neglected afterthought in the development process, often tossed over the wall to a QA team that isn’t well integrated into the product team. Given that experience, we have a hard time trusting code that’s being handed over to us – we get it handed to us and it doesn’t work!

Well, agile pretty much understands that without pervasive testing, this kind of fast-cycle process is doomed. At its extreme, some practitioners use test-driven (aka test-first) development, where failing tests must be written first and then code filled in behind them until the tests pass. This creates a large inherent test framework.
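
If “test first” sounds abstract for systems work, here’s a minimal shell sketch of the idea – the paths are hypothetical placeholders, not our actual setup. You write the check before doing the work, watch it fail, then do the work until it passes:

#!/usr/bin/env bash
# Written BEFORE the deploy work: both checks fail until someone
# creates the vhost and gets the Apache config valid.
test -f /etc/apache2/sites-enabled/ourapp.conf || { echo "FAIL: vhost missing"; exit 1; }
apachectl configtest || { echo "FAIL: apache config invalid"; exit 1; }
echo "PASS"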

Even agile groups that don’t go that far almost always have metrics on unit-test coverage and a required bar devs must hit. Here, our desktop software group that’s newly using Scrum has the mandate that “there must be XX% unit test code coverage or you’re not ready to ship.”

Similarly, acceptance testing (automated, continuous testing of user stories against the code) is a common part of agile. Continuous ongoing testing ensures quality through the dev cycle and reduces the need for time-intensive, and mysteriously always insufficient, big-cycle regression testing.

This is a great culture. And there are all kinds of different tests – unit tests, integration/functional/regression testing, performance testing, fault testing… Starting to get interesting to you? How about monitoring? In reality, application monitoring is a special case of testing – it’s a “lightweight integration regression test.” Our initial approach to DevOps includes making test-coverage goals for things like monitors and performance testing, because that plugs into the existing agile mindset well.
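
For instance, here’s the kind of trivial check that blurs the line between test and monitor – a sketch, with a hypothetical endpoint and threshold:

#!/usr/bin/env bash
# A "lightweight integration regression test": is the app up, and fast enough?
URL="https://ourapp.example.com/login"   # hypothetical endpoint
MAX_TIME=2                               # seconds; tune to taste

if curl -sf --max-time "$MAX_TIME" -o /dev/null "$URL"; then
    echo "PASS: ${URL} responded within ${MAX_TIME}s"
else
    echo "FAIL: ${URL} is down or too slow"
    exit 1
fi

Run it after a deploy and it’s a smoke test; run it from cron or your monitoring system and it’s a monitor. Same artifact, same coverage goal.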

Bonus new-terminology thing – the quick acceptance test you do upon release, which we always called “critical path testing,” is now being called “smoke testing” by the hip. Update your dictionaries!

A side note on formal QA groups. Just as we are working on DevOps, there has been previous work on how QA teams interact with agile dev teams, and there are a variety of doctrines on how to split the work – often, devs are responsible for a lot of the testing. It’s a hard balance: you want the devs responsible for some of the testing, because the best testing is “close to the code,” but just as with Ops, a real QA team has expertise beyond what a developer can bolt on with 10% of their attention. Here, we have a dev team and a remote QA team; devs test their own code on the daily build, and then there’s a weekly push to a more stable environment where the QA team does acceptance testing and is moving into performance testing and the like.

Anyway, this endemic focus on testing, test automation, and testing metrics is the pin that makes the agile flywheel actually turn without flying apart. (You are correct, some agile teams don’t do this – we call those “the unsuccessful ones.”)

And this is for you to do as well! There’s a whole post, or series of posts, in the topic of what a unit test means for something infrastructurey – it is incumbent on you to figure that out and keep high test coverage on your own work too.

Refactoring

In general, agile dev is the epitome of horizon planning. You know you can’t get all the requirements ahead of time (or if you do get them all ahead of time, what you come out with won’t serve any real human’s need), and similarly, preplanned architecture and design often don’t survive contact with the scrum. So it’s not “don’t plan or design”; it’s “plan and design in an ongoing manner.”

This is one of the scariest parts for an ops person – we assume that we get one “bite at the apple,” and that once we’ve set up the systems and let in the developers, we’ll never be allowed to change anything without a fight. But developers have this problem internally all the time – one dev is working on a core library or API that other developers are using, and they don’t wait for the core guy to get done before they start. Instead, they have adopted a concept they call refactoring. Refactoring just means that each sprint, you are open to redoing fundamental stuff that needs to change (or that you realize you did kinda ghetto in the first place).

Because this is an accepted part of the iterative approach, you get to leverage it as well. First iteration, they get the basic Tomcat and MySQL install out of the repo and can get started – and then in the second iteration you front it with Apache, or tune the DB for security, or whatnot, and they have to make some changes to fit. I’m not promising no one will ever cry about this, but it’s part of what makes the culture successful, so it’s there for you to leverage.
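
In infrastructure terms, the sprint-over-sprint refactor might look like this – a sketch; the package names and steps are illustrative, not our actual stack:

# Sprint 1: the simplest thing that unblocks the devs.
sudo apt-get install -y tomcat6 mysql-server

# Sprint 3: refactor - front Tomcat with Apache and tighten things up.
sudo apt-get install -y apache2
sudo a2enmod proxy_http
# ...add a ProxyPass to Tomcat, restrict MySQL to localhost, etc.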

And for you to adhere to! Be open to refactoring your infrastructure based on the emerging project needs.

Source Control

A developer might not even mention this, and most books on agile don’t, because to them it’s so fundamental a discipline that it’s like breathing air. Sadly, the same cannot be said of ops folks, so I’m mentioning it. When code is changed, it goes into a shared source control repository, which gives other people in the group visibility into it (a collaboration touchpoint), serves as a common place to source it from (a deployment touchpoint), and makes it easy to manage multiple versions, even experimental ones, and to merge or roll back changes.

This is the most fundamental empowering technology of modern software development (not just agile), and you must take it up immediately or you have lost. Fair warning. It is the stepping stone that will allow subsequent Cool DevOps Automation to happen.
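
If you’re starting from zero, the core workflow is genuinely this small – a sketch using git, with placeholder file names; any modern version control system gives you the same moves:

# start tracking your configs and scripts
git init
git add httpd.conf deploy.sh
git commit -m "baseline web config and deploy script"

# experiment on a branch instead of on the live box
git checkout -b tune-keepalives
# ...edit httpd.conf...
git commit -am "tune keepalive settings"

# merge it if it works, or just delete the branch if it doesn't
git checkout master
git merge tune-keepalives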

Conclusion

These three disciplines convert agile/Scrum from a dangerous free-for-all into a technique that gets your product done both more quickly and with higher quality than a waterfall method. I’ll talk next time about how Ops slots into all this, and how you can fit your systems admin work into a Scrum mindset.

Also note, there are some other agile disciplines surrounding agile design, encapsulation, patterns, and whatnot, which I don’t understand well enough yet to speak authoritatively on. Feel free to chime in with other core disciplines if you do!


Quora vs StackExchange

As a Web ops guy, I’ve used the Stack Exchange sites, especially Server Fault, a lot. They’re the best place to do technical Q&A without having to immerse yourself in a specific piece of open source software and figure out what bizarre way you’re supposed to get support for it (often a crufty forum, a mailing list with a bizarre culture, or an IRC channel).

However, they have started, in my opinion, to come apart at the seams. They began with the “holy trinity” of Stack Overflow (coders), Server Fault (admins), and Super User (users). But lately they have expanded their scope to sites for non-techie areas, and have also started to fragment the technical areas. So now if I have a question, I am confronted with separate communities for Server Fault, Linux & Unix, Ubuntu, and more. Or even worse, Stack Overflow vs. “Programmers” vs. language-specific sites. This heavily segments the population and leads to the same problems the weird little insular mailing lists have. It makes me use the SEs a lot less. I don’t want to have to engage with 10 different communities to answer my everyday questions (and I sure as hell am not going to follow 10 to answer questions), so in my opinion they are cannibalizing their success; it will implode under its own weight and become no better than any Internet forum site.

Recently I started seeing tweets about Quora from @scobleizer, saying stuff like “is it the future of blogging?” and pitching it as some Twitter-blog hybrid, which of course caused me to ignore it. But then it started getting a lot more activity, and I thought I’d go check it out. But if you go to the Quora page, you can’t see anything without logging in. And of course, if you log in with Twitter or Facebook, it wants “everything from you and your friends, ever.” So I wandered off again.

Finally I gave in, went over, and logged in, and it’s actually pretty neat – it’s Q&A, like Stack Exchange, but instead of segmentation into different sub-communities, it uses the typical tag/follow/etc. Web 2.0 paradigm. So “Stack Exchange plus Twitter” is probably the best analogy. Now, on the one hand, that more unmanaged approach runs the risk of becoming like Yahoo! Answers – utter crap, full of unanswered questions and spammers and psychos – but on the other hand, I like my topics not being pushed down into little boxes where you can’t get an answer without mastering the arcane rules of that community (like the hateful Cygwin mailing list, where the majority of new posters are chased off with bizarre acronyms telling them they are using email wrong). The simple addition of up/down voting is 80% of the value Stack Exchange offers over forums, so will that carry the day?

Now, maybe it’s because they’re having capacity problems, but the biggest problem with Quora IMO is that you don’t get to see any of it when you go there until you log in and give them access to all your networks and whatnot, which I find obnoxious. But if they fix that, then given the harmful direction SE is going, I think Quora may be the next big answer for Q&A.
