Showing posts with label software_engineering. Show all posts

Wednesday, February 20, 2013

Node.js meet IBM PureApplication System – Part 3 of 3


This is a re-post from the one on the Expert integrated systems blog.  Please go there for discussions and feedback.
In this final post of the three-part series about our Node.js plug-in for IBM PureApplication System, I cover testing and debugging plug-ins and deployments.  Additionally, I briefly discuss some advanced features that could be useful moving forward, point to some references, and list some tips and tricks for using IBM PureApplication System and the Node.js platform.
Recap: Part 1 and 2
In part 1 of this series I discussed the plug-in model for IBM PureApplication System along with details on how to create one.  Since the series is about demonstrating how open this platform is, I also picked and discussed the popular Node.js web application platform.  After a short background, I discussed how to design a plug-in for Node.js that could support simple patterns that deploy Node.js applications from a Git repository such as Github.com.
The subsequent post (part 2) covered how to create the plug-in designed in part 1.  Some of the details covered in part 2 involved deep diving into how to create, build and install a new plug-in.  It also included discussions of the metadata.json and config.json files as well as the scripts to configure, install and start the Node.js server.  In that same post I also showed how to install the plug-in and use it in a simple pattern on an IBM PureApplication System environment.
In this post I will cover what you can consider as pitfall avoidance and tips and tricks, as well as pointers for advanced features.  In particular I also have a brief discussion of where to go from here.
Testing and debugging plug-ins
Testing any plug-in amounts to deploying it into a running PureApplication System setup, creating a pattern with the plug-in components and using that pattern.  However, such a test cycle can be time consuming and error prone.  It is therefore recommended that you individually test the different parts of the plug-in before doing the eventual complete tests.  I give three primary approaches to testing your plug-ins and debugging common issues that may arise:
1. Build / Deploy / Instantiate / Test
In this scenario, you build your plug-in according to the Plug-in Development Kit (PDK) and deploy it using the IBM PureApplication System dashboard (see part 2 for details).  You then create a pattern with the plug-in and test it by deploying an instance. While this approach should always be included in your test plan, it is usually the most error prone, and it can be slow: any issues with the plug-in only surface at a late phase, forcing you to restart the cycle.
2. Build / Test / Deploy / Instantiate
In this approach we move the test phase earlier—before even deploying your plug-in.  This requires you to create tests for the various parts of your applications.  Using testing frameworks like PyUnit you create unit tests for the various Python files in your plug-in and make sure that while they can be built into a plug-in, they are also passing your tests.  You might need to isolate your code from the PDK files or stub or mock any dependencies.
3. Test / Build / Deploy / Instantiate
In this final approach we move the tests even earlier.  The idea is to create your Python scripts even before packaging them into a plug-in and testing (through PyUnit, for instance).  Once your configure.py, install.py, and start.py files work fine, you can then retrofit them to follow the PDK format.  In this case you are using a set of Python scripts that you are sure are able to configure and install your component, prior to even packaging them in a plug-in.
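For example, a PyUnit (unittest) test for a small helper that a configure.py or install.py script might contain could look like the following sketch.  Note that build_git_clone_command is a hypothetical function invented here for illustration; it is not part of the PDK:

```python
# Sketch: unit-testing a plug-in helper before packaging it into a plug-in.
# build_git_clone_command is a hypothetical helper that an install.py script
# might use to fetch the Node.js application from its Git repository.
import unittest


def build_git_clone_command(repo_url, target_dir):
    """Build the git command used to fetch the application sources."""
    if not repo_url:
        raise ValueError("a Git repository URL is required")
    return ["git", "clone", repo_url, target_dir]


class BuildGitCloneCommandTest(unittest.TestCase):
    def test_builds_expected_command(self):
        cmd = build_git_clone_command("https://github.com/user/app.git", "/opt/app")
        self.assertEqual(
            ["git", "clone", "https://github.com/user/app.git", "/opt/app"], cmd)

    def test_rejects_empty_url(self):
        self.assertRaises(ValueError, build_git_clone_command, "", "/opt/app")
```

Running the tests with $ python -m unittest before packaging the scripts gives you quick feedback without a full deployment cycle.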
Whichever strategy you use, I want to highlight two common pitfalls that plug-in developers encounter.  First, once you build and deploy your plug-in, verify that the IBM PureApplication System dashboard correctly shows and lists it.  Most added plug-ins are shown under “Other Components” in the Virtual Application Builder tool.  There you should see an icon matching the image you specified in your metadata.json.
Second, if you do not see your component after a successful installation, it is likely that your IBM PureApplication System does not have the pattern type for your plug-in enabled.  Please refer to part 2 of this series for the steps to enable it.
As one can easily notice, the main difference between the three approaches is when testing is introduced.  Since I am of the school of thought that testing early and frequently is usually a great idea and a worthwhile investment, my primary recommendation is to move your tests as early as possible.  Finally, it is worth noting that most of the pitfalls and debugging techniques discussed here apply regardless of which approach you use.
Node.js tips and tricks
In this section I give a quick primer with pointers on how to get started with Node.js.  This is not intended to be a complete overview of the subject but rather a set of reference links that I have found useful as I myself got started in Node.js and in creating this three-part blog post series.
1. Installing Node.js
The Installation wiki page for Node.js on Github.com contains the official set of instructions for getting Node.js running on your platform.  Most of the instructions assume that you have root access to the system you are using and that you use one of the following popular operating systems: Mac OS X, Windows 7, or various flavors of Linux.
Current Node.js releases depend on Google’s V8 engine.  So, one aspect of the installation for Node.js is getting the V8 engine installed onto your machine.  This might involve building it.  Generally, this step works fine for most operating systems.  However, you need to make sure you have a correct Python interpreter as well as the correct C/C++ compilers and libraries.
For Mac OS X, these come with installing the latest Xcode development package; for Windows, MS Visual Studio should contain the correct dependencies; and for Linux, the latest GNU C/C++ compiler and libraries should suffice.
2. Adding packages via NPM
Like most modern software frameworks, Node.js’ architecture is modularized.  That is, the basic Node.js installation covers the core features, while additional components can be added (as needed) after the fact.  This allows your Node.js installation to be easily customized and extended.  For instance, if you want to use the language CoffeeScript (a language that compiles to JavaScript), then you simply add the coffee package.
To install and manage these modules, Node.js uses the Node Package Manager (NPM).  Depending on how you installed Node.js, NPM may already be bundled with it or may need to be installed separately.  Once available, you can use NPM to easily add new modules (or packages) to a Node.js installation as well as update existing packages.  Even Node.js itself can be installed and updated using NPM!  Finally, the NPM web site (https://npmjs.org/) also serves as a repository of various OSS modules you can readily access; as of this writing, 18,744 modules were available.
3. Troubleshooting and debugging
I have found three areas that cause issues when getting started with Node.js.  First, the installation process, while usually flawless, can be painful for some users.  Primarily this has to do with not having the correct development environment when building Node from sources and setting up the V8 JavaScript engine.  Carefully following the Installation wiki page for the operating system you are targeting is your best solution.
Second, while installing Node.js modules through NPM is as easy as issuing the command $ npm install coffee, it has a couple of pitfalls.  NPM allows a user to have multiple module package directories (where the modules are installed) as well as a global one.  To install modules in the global directory, you must use the -g option.  Also, since the global module directory usually defaults to /usr/local/lib/node_modules on most UNIX-compatible systems, accessing it requires root privileges, so such installation commands must be run with that access: $ sudo npm -g install coffee.  It is also recommended that the module directory be exported in the shell where the application will be executed, which is achieved with: $ export NODE_PATH=/usr/local/lib/node:/usr/local/lib/node_modules
Finally, when running Node.js applications, it is usually important to first run $ npm install in the application directory.  The application’s package.json file is then used to determine the dependencies and which packages need to be updated and/or installed.  The application can then be run using the $ node command.
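For reference, the dependencies that $ npm install resolves come from the application’s package.json.  A minimal sketch might look like the following (the package name and the express dependency are illustrative, not taken from any specific application):

```json
{
  "name": "hello-app",
  "version": "0.0.1",
  "dependencies": {
    "express": "3.x"
  }
}
```

With this file in the application directory, $ npm install fetches the listed packages into a local node_modules directory.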
Plug-in advanced features
Creating or re-using an existing cloud component plug-in is the first step to creating patterns for IBM PureApplication System.  While, as we demonstrated, you can use a cloud component to create a simple pattern, anything more complicated requires other cloud components as well as linking these components together and adding quality of service (QoS or policy) features to the current components.
While a thorough discussion of any of these features (linking, QoS) would warrant its own post, I want to highlight each here, along with monitoring and scaling.  The goal is to hint at what is possible with IBM PureApplication System and, in some cases, point to where more information can be found.  Above all, the goal is to reiterate that the IBM PureApplication System plug-in model is flexible and open, and enables modeling and deployment of complex workloads.
Adding links
[Screenshot: workload services view showing the WebSphere component connected to the DB2 component]
The most common advanced feature for creating comprehensive patterns is to create links between components.  For instance, one might want to create a pattern for Node.js applications and relational databases such as DB2, MySQL, or Postgres, or a graph database like Neo4j.  For each of these cases, you would need to create a link plug-in that specifies the Node.js plug-in as its source and the datastore plug-in as its target (this assumes that you reuse or create a plug-in for the target datastore).
Link plug-ins are created like component plug-ins and have a similar structure; however, they use “link” as their type and include additional parameters in their metadata.json, such as “source” and “target”.  Like a component plug-in, a link plug-in can also include attributes.  For instance, when connecting to a database, you may want to include the database name as a link plug-in attribute, as well as its JDBC JNDI URL or other URL references.  The link plug-in uses the attributes’ values (specified by the user) to configure the source and target components correctly.  Visually, this is represented by the screenshot above showing how the WebSphere component connects to the DB2 component in the J2EE pattern that ships with the IBM PureApplication System.
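A link plug-in’s metadata.json might then look roughly like the sketch below.  Only the “link” type and the “source”/“target” parameters come from the discussion above; every other field name here is illustrative, so consult the PDK documentation for the exact schema:

```json
{
  "id": "nodejs-to-db",
  "label": "Node.js to database link",
  "type": "link",
  "source": ["nodejs"],
  "target": ["db2"],
  "attributes": [
    { "id": "dbname", "label": "Database name", "type": "string" }
  ]
}
```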
QoS policies
Another important class of advanced features that could complement the Node.js plug-in created in this series of posts is support for QoS policies.  An example of using QoS policies for a cloud component is illustrated in the figure below for the WebSphere Application Server (WAS).
[Screenshot: WAS QoS policy options for the Java virtual machine]
The QoS policies for WAS are extensive.  In the screenshot above we show one aspect which allows the user to specify various options for the Java virtual machine that WAS will run on.  For instance, using the JVM policy one can specify the initial size of the heap as well as its maximum value.
For the Node.js plug-in, an obvious set of QoS policies would be around fine-tuning Node.js and the V8 JavaScript engine.  For instance, one could specify the maximum V8 stack size or pass options directly to the V8 JavaScript engine to fine-tune its performance.
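As a sketch of how such a policy could be wired in, the plug-in’s start script might translate policy attributes into node command-line flags.  The policy keys below are hypothetical names invented for illustration; --stack-size and --max-old-space-size are actual V8 options that node accepts:

```python
# Sketch: translate hypothetical QoS policy attributes into node/V8 flags.
# The dictionary keys are made-up names; the flags themselves are real V8
# options exposed through the node command line.

def build_node_command(app_entry, policy):
    """Build the node command line from a QoS policy dictionary."""
    cmd = ["node"]
    if "v8_stack_size_kb" in policy:
        cmd.append("--stack-size=%d" % policy["v8_stack_size_kb"])
    if "v8_max_old_space_mb" in policy:
        cmd.append("--max-old-space-size=%d" % policy["v8_max_old_space_mb"])
    cmd.append(app_entry)
    return cmd

# A start.py script would then hand this command to the process launcher,
# e.g. subprocess.Popen(build_node_command("app.js", policy)).
```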
Monitoring and scaling
The final set of features that would be needed for our Node.js cloud component to address most of the needs of modern production-level workloads is monitoring and scaling.  Most cloud environments suffer from failures.  These failures, while infrequent, are inevitable.  As such, any production-level pattern needs a means to recover from some failures and therefore some level of monitoring capability to know when failures occur or are about to occur.
The IBM PureApplication System comes with monitoring features that can be added to any plug-in.  The monitoring is done in the form of a plug-in that can be required by your own plug-in.  Without going into too much detail, its primary features help monitor processes on virtual machines (VMs), trigger actions, and aggregate the results into the IBM PureApplication System dashboard.  Additionally, the IBM PureApplication System comes with some basic monitoring features readily available from the dashboard, such as viewing current VM status as well as remotely accessing logs for any VM and component.
A feature that works side by side with monitoring is scaling.  One of the advantages of using a cloud environment for your workload is the ability to quickly scale the workload (up and down) to meet immediate demand.  While scaling is not easy and can be tricky and workload-specific, some general strategies, such as replicating services and using a load balancer to spread the load across a pool of services, are tried and true ways to scale.
Adding scaling to a plug-in is another advanced topic.  It can be achieved by adding a QoS policy to capture the user’s scaling requirements and then passing that information to the plug-in to customize your startup scripts.  One example is the WAS plug-in’s QoS scaling policy, which allows users to specify rules to scale up and down based on the current HTTP request volume.
Wish list
While creating a new plug-in is relatively straightforward when you have some examples, as we discussed, various pitfalls can arise during development.  Simplifying and streamlining the plug-in development process is needed.  In particular, the following list of potential improvements could help:
1. A plug-in generator.  This is a tool that would take some basic input like a name, an attribute list, an image, and so on, and generate the scaffolding for a working plug-in that, of course, would have scripts that do not do anything.  The point is to get the user moving fast, with a working plug-in they can modify and iterate on afterward.
2. A plug-in simulator that could simulate the lifecycle of a plug-in during deployment.  The goal of this simulator is to reduce the time it takes to test the deployment of plug-ins during development.  Instead of using a real IBM PureApplication System and installing and testing a plug-in, developers would use the simulator to execute these steps in a simulated fashion in seconds versus minutes.
3. A plug-in mock testing environment. Along with the simulator, we also need a mock testing environment mimicking the real environment where the plug-in lives.  Ideally, this could be extended to support full pattern development with mocked VM resources.
4. A developer plug-in catalog and browser so that all plug-ins created by a developer can be collected in one place.  The various versions of each plug-in can also be displayed along with notes for each plug-in.  This type of browser would facilitate long-term development of plug-ins along with their maintenance.
Since we are always looking to improve the process of developing for and using our IBM PureApplication System environment, this wish list has been communicated to the research and development teams.  Future versions of the PDK and the IBM PureApplication System platform might include all or some of these features.  If you have an opinion on these or would like to see other development tools, then please comment on this post.
What next?
In this three-part series we did a deep-dive investigation of how to support the Node.js web application stack in the IBM PureApplication System environment.  The investigation was thorough and took us from design to implementation and testing of the plug-in.  Additionally, we discussed advanced features for expanding the current plug-in to support quality of service as well as features like scaling and monitoring.
I hope you have enjoyed this series and that you now have enough information to start building your own set of plug-ins as well as consider contributing them to our IBM PureSystems Marketplace.

Saturday, June 27, 2009

Social software and media, the virtues of agile development, and the new Iranian revolution


This is a repost of a blog entry created for the OOPSLA 2009 official blog.

The Web is now social. Increasingly, people young and old are spending a substantial part of their social lives by using the Web. Social networking sites like Facebook, MySpace, YouTube, Twitter, and many others, are giving people the ability to maintain various social interactions and connections with friends, family, and acquaintances. Indeed using Facebook one often hears of how people are reconnecting with old friends and making new ones. Twitter has quickly transformed into one of the best means for sharing real-time information on various topics all across the Web. YouTube’s videos can transform the fortune of unknown talented individuals from the most remote part of the world as well as shed light on issues and facts that otherwise would have been completely ignored.


As the world bears witness to the recent unrest in Iran, the power of social media has never been clearer or more manifest. Twitter, Facebook, and YouTube are giving a voice, a face, and a communication channel to the people on the streets of Tehran. All this comes despite the efforts of the Iranian regime to shut down media and reporters across the country. While it is uncertain how this new revolution in Iran will end, and as the world continues to watch intently, there is one undeniable truth: the unintended impact of social media. As Clay Shirky puts it, social media has enabled the democratization and amplification of the voices of the people...

But what does social media or social software have to do with OOPSLA? Surely they are software systems, but what else does the conference bring to help this new wave of Web software?
It is true that social software is about connecting and empowering people and thus appears to be a purely social phenomenon that is simply enabled and constructed using basic Web software principles and approaches... There is, however, an important and subtle aspect that is hard to see on the surface and that has its roots in OOPSLA. In a nutshell, it is about agility... Looking deeper at the various social media sites mentioned above, one other thing becomes amply clear. Many of the usages and social consequences of these sites were mostly, if not completely, unintended.

Most of these sites started with initial ideas of connecting people but ended up with emergent usage patterns that are truly powerful and consequential. The creators of Twitter did not set out to create the new voice for modern democratic revolutions---that fact emerged accidentally. So the question becomes: how does one create software that can have the kind of profound social impact and success of a Twitter? There is no “cookie cutter” solution; however, one well-known approach (one that was used at Twitter) is to use software frameworks and principles that are in line with agile software development.

At a recent talk at IBM’s Almaden Research Center, Twitter’s CEO Jack Dorsey was asked by Almaden Ph.D. intern Ajith Ranabahu whether, in light of the recent scaling issues Twitter has experienced, he and the original Twitter founders would have chosen another language or framework to create their company. While Dorsey acknowledged some of the limitations of the Ruby on Rails platform they are using, he was quick to say that he would still have made the same framework choice given another chance... Dorsey's reasoning is simply that the key factor is not one of scaling and architecture, but rather of agility and speed of development.

Rails is well known for providing both development speed and agility in bundles. By being able to materialize their ideas quickly, Dorsey and the other Twitter co-founders created a working version of their site in a few weeks, which let them observe the initial users and continuously iterate to find the subsequent micro and now macro successes. With each group of users, new patterns emerged, and Dorsey and team could quickly iterate, adjust, and malleably modify their software to match the emerging usages and patterns. Without the agile virtues of Ruby on Rails, it is doubtful Twitter would be the phenomenon it has become today.

As perhaps the preeminent incubator of agile software development thinking during the last decade and a half, and the place where agile went mainstream, the OOPSLA conference, it seems, has been a key enabler for the Agile movement that has inspired frameworks like Rails and, indirectly, sites such as Twitter. The agile practices of test-driven development, pair programming, continuous integration, and the Scrum team organization approach have roots either directly or indirectly at OOPSLA, or have progenitors that frequently attend and participate at the conference. And if we go even further, the virtues of rapid, instant prototyping and having “the customer as the driver”, as Kent Beck likes to say, may now be taking their natural course in social media and crowd sourcing of content.

And while social software is undoubtedly a phenomenal success, there remain some serious challenges, and this also is where the OOPSLA community could help further. First, there are the programming challenges. The amplification attributes of Twitter and Facebook occur because the Web is now programmable. Using APIs and simple scripts, it is easy to create aggregation points, as well as new data sinks and sources for information flowing through the social media channels. This is how a tweet from the streets of Tehran can flow, be retweeted, and end up, almost instantaneously, on the television screens of millions in the United States and Europe. The challenge is making it quick and easy for anyone to collect, filter, and aggregate social media information.

Second, and perhaps most importantly, are the challenges around the provenance of the information that is flowing through the social media channels. Now that everyone has a voice, it becomes increasingly difficult to discern authentic voices from those trying to manipulate the system. Here, research in data provenance, data mining, and data filtering for the massive amount of realtime and streaming data is key. Realtime and stream programming pose significant and fundamental challenges that beg for systems, frameworks, and programming language help.

Finally, as in all computing for open systems (such as the Web), the concerns of privacy and security remain paramount. It is now well accepted that addressing these persistent issues cannot be done after the fact, but are aspects that must be addressed at the early stages of development. There is a clear need to share best practices and uncover patterns to help overcome these challenges...

So while social media and social software are helping transform the fabric of social interactions from the hills of Silicon Valley to the bars of Austin, to the cafés of Paris, and to the streets of Tehran, remember that many of these social consequences were not planned, but rather emerged from the resulting empowering software that is itself possible due to the virtues of agile software development and practices... And together with the community that produced and helped agile practices go mainstream, we can help address some of the important remaining social media challenges so that the new voice of the people can persist, remain strong, and authentic.


References
Go here to watch Dorsey's talk at Almaden and Ajith's question toward the end.

Updates
1. Initial post on 06/27/2009

Sunday, May 31, 2009

Why OOPSLA matters?


This is a repost of a blog entry I created for OOPSLA 2009 official blog

These days, we all take for granted that software is best built incrementally, that testing while coding leads to better quality software, that virtual machine-based languages can be as fast as natively-compiled languages, that patterns are a great way to bootstrap your thinking when designing, and that an object-oriented language with single inheritance is likely easier to deal with than one with multiple inheritance...

Many of these well-accepted tenets in the software industry and programming trade have their roots in one conference. A conference that started with a band of early programmers who were passionate about a powerful new style of programming: object-oriented programming. That style has evolved over the years to become a source of innovation for all things programming and software. Indeed, most of the assertions above can be traced back to their origins in papers, workshops, or ideas that stemmed from that conference: OOPSLA. Such is the legacy of this conference.


Times and technologies change. That fact means that every year OOPSLA has had to introspect and look for ways to rejuvenate itself and encourage exploring the boundaries of software. The inventor of Self, Dave Ungar, likes to state it simply: always "question your assumptions."

What are OOPSLA’s basic assumptions? Well, over the years it has been a conference about software languages, software development, software development methodologies, and software systems. Should this still be our focus? Software is embedded everywhere, and the success of devices like the iPhone and Blackberry is a good indication that at least one immediate future of software is in mobile solutions that include a combination of hardware and software within an ecosystem (private or public).

The Web has also transformed our social lives and is increasingly a communication fabric unparalleled in scope, reach, and immediacy. Web services like Twitter and Facebook have transformed the Web into a real-time virtual social square. Information is flowing quickly and at ever-increasing volumes. This social software is not only near real-time and location-aware but is also interconnected with complex executable logic. Mashups of Web APIs and data have led to a boom of innovations analogous to the early days of commerce on the Web.

The current Web not only has resulted in the democratization of information and applications, but increasingly it is the gateway to reaching every business’s data centers and application centers. Using Web APIs, a startup can run its entire operation virtually on cloud computing infrastructure without concern for acquiring sufficient compute resources to scale should that startup become the next overnight success---that is, if it is TechCrunched, Dugg, or Slashdotted.

With so much happening around software and the Web, why should someone from academia or industry still attend an object-oriented conference?

This is an important question. It is one that cannot be completely answered in one blog post. However, I will give you a short answer now and elaborate each point over the next three months in various blog posts and podcasts. I hope to convince you that OOPSLA matters. It matters to both academic and industrial participants. It matters because of its tutorials, its workshops, its keynotes, all of its leading-edge content. Most of all, it matters because of the world-class people who regularly present their new ideas at OOPSLA. As you will see when we announce the program, all of the hot topics mentioned above will be represented in some fashion in this year’s program...


What makes any conference really worthwhile is the quality of the people who attend. OOPSLA has a tradition of attracting the best and most innovative students, professors, consultants, industry researchers, and practitioners. This year will be no different. Come to OOPSLA 2009 and you are sure to meet members of the Gang of Four, the instigators of the Agile movement, and the creators of the Web’s hottest languages and frameworks, as well as hear from researchers and practitioners at leading universities and companies.

Yes, “the times they are a-changin’.” But just as Bob Dylan will forever have a certain “je ne sais quoi” that makes his music pertinent, classic, and always filled with relevant content and meaning, so too will the OOPSLA conference. As long as we keep welcoming a core group of innovators, keep including new topics in tutorials, workshops, and keynotes, and keep attracting the quality content that you will hear when we announce the program, the conference’s future is very much assured and alive.

Check back frequently for other posts as we peel away at this year’s program and demonstrate why OOPSLA matters to you.

Updates
06/01/09 - added link to OOPSLA 2009 blog entry

Monday, August 11, 2008

OOPSLA 2008 tutorial program

I wrote this post originally for the OOPSLA 2008 blog. However, to help reach a broad audience, I am taking the liberty to replicate it here. After all, I'll be giving my Web APIs on Rails: using Ruby on Rails to create Web APIs and Services Mashups tutorial at OOPSLA 2008 on Wednesday, October 22nd. Sign up if you want a hands-on, fast-track, and fun overview of Ruby, Rails, Web APIs, and mashups.





While OOPSLA is well known for advancing the fields of computer science, software engineering, and their practical applications---after all, Java, Eclipse, Aspect-Oriented Programming, Design Patterns, Agile Methods, just to name a few, got their start at past OOPSLAs---another important aspect of the conference is the wide-range of tutorials available to novices and experts alike.

Continuing this tradition, OOPSLA 2008 boasts a tutorial program unlike any I have seen. There are more than 40 tutorials covering advanced, new, novel, and mainstream topics such as Agile, Test-Driven Development, Domain-Specific Languages/Modeling, Microsoft's F#, advanced C++, Google's Guice, SOA, Apple's iPhone SDK, and Ruby on Rails, to name a few.

Tutorials are offered daily throughout the conference and are often given by the creators of the technologies in question or by worldwide experts in the field.

Don't miss learning some new skills, sharpening old ones, or getting a head start on tomorrow's next big thing. Register today for OOPSLA and add a tutorial or two to your OOPSLA conference experience.

Thursday, July 12, 2007

On the nature of work... or what's the relationship between Mozart, Parkinson, and agility?

Does more time lead to better work?
"I need an extension to submit my paper," Jane says. "OK, but I can only give you one more day," Peter replies. "Well, if I had one more week, the quality of the paper would be significantly better..."
How many times have you heard similar conversations or been part (on either side of the fence) of a similar debate? Not just for writing but for any type of creative work. In today's continuous work environment and Web time, does more time lead to better work?

A positive answer to this question would seem to imply simply that one would spend more time on the task at hand, revising, researching, and improving it, now that one has more time... interesting. In software engineering there is a principle called (interestingly enough) Parkinson's law, which says, to paraphrase, that work tends to expand to fill the time allotted to it. This phenomenon, known for decades and observed in many contexts, certainly including software, helps explain a lot of why software tends to be late, along with other human tasks... The rationale for why Parkinson's law applies so broadly has to do with human nature and also with the dynamics of human tasks and their non-linearity, in particular the non-linearity of human inspiration. I believe that this principle is universal to any type of creative work.

Now this may not be your case. However, for me, any deadline extension means that I end up pushing some of the work later and addressing other top-priority items now. Naturally, sometimes it could also mean that I spend more time on a given item and improve its quality. However, in almost 90% of my tasks I am starting to be like Amadeus (an unfair comparison, though more of an aspiration) and deliver my tasks (papers, code, presentations, and so on) without much revision---basically as they came to me.

Again, this is not to compare oneself with the genius of Mozart, though aspiring to be like him may be a cool thing. We are all stretched thin and under pressure, so learning how to get things done the first time, and done well, is a skill that can pay off big, though it clearly also incurs a certain amount of risk that could cost dearly.

The agile software development movement solves this problem in a similar fashion while reducing the risks of delivering tasks without revision and by reducing waste. Essentially, in agile methodologies, iterations are an integral part of all tasks, while early-and-often delivery is also a primary activity. Agile teams deliver their tasks quickly and revise them often while focusing on what is most important at the time by allowing the stakeholders to drive (i.e., direct the priorities), which helps reduce efforts that are not important or potentially wasteful.

Can this approach be successfully applied to other human tasks? Is this a good thing? Are there alternatives that also allow one to remain competitive?