It's been a month now, but I realized that I've never posted a wrap-up of JUC 2013. So in the spirit of "better late than never", here goes.
First of all, I wanted to thank everyone who came. More than 400 of you attended, and another 600 signed up for live streaming of the events (and I know some people watched those live streams past midnight in their local time zones!). I did my part signing bobbleheads and answering questions, and I was finally able to put faces to some of the people I actively interact with in the community but had never met before.
All the slides and video recordings are available online if you couldn't join us.
Alyssa said she's received a lot of feedback from folks, and she's already planning for next year. If you are interested in sharing your thoughts on how to do this better next year, it's on the agenda for next week's Jenkins project meeting.
Finally, everything in the San Francisco Bay Area is incredibly costly, and an event like this was really only made possible by generous sponsors. We really want to keep them happy so that they can help us make this event happen next year as well, so I thought the least I could do is give them a spotlight and talk about who they are and what they do:
form you wish to use it - on-premise or in the cloud. Check out the many resources available on the CloudBees website for Jenkins fans - whether you use open source Jenkins, Jenkins Enterprise by CloudBees or Jenkins in the cloud.

## Gold Sponsors

Appvance delivers technology and services to prove and improve
performance, security and scalability of websites, apps and mobile apps.
The largest brands in the world choose Appvance, from Pepsi to Best Buy
to Bell Aliant. Learn more.
Have questions on SDLC tools or agile processes (especially Jenkins Enterprise, CI or CD)? Leverage our 25 years of expertise for assistance with CloudBees, XebiaLabs, Sonatype, JFrog, Atlassian, SVN, Git, Rational, Microsoft TFS and many more. Visit www.BDS.com to learn more.
digital video advertising for the world’s largest brands. Jenkins has
become a core piece of our productivity tech stack here at BrightRoll,
and its importance is increasing. During the time that we've used it
we've seen a huge benefit to participating in the Jenkins community,
getting support from core contributors and plugin authors, and we try to
contribute back whenever we can. www.brightroll.com
The Jenkins User Conference is the only place you can actually feel the
Jenkins community and understand that being part of it is not just a
commitment, it is a privilege we are honored to share. Learn more about
JFrog, our Artifactory Binary Repository solution, and our new Bintray social platform for sharing, publishing and managing binaries.
LMIT Software is now GerritForge, the leader in Agile coaching and Development Management. We are active contributors to Jenkins (see http://jenkins-ci.mobi) and Gerrit Code Review, and we can enable their adoption and integration into the Enterprise Continuous Delivery chain.
provides end-to-end, real time visibility into the operations of network
connected applications wherever they run – across browsers, mobile
devices and servers. Sign up for a FREE account at
newrelic.com/cloudbees.

With CloudBees DEV@cloud (Jenkins in the cloud) or Jenkins Enterprise by CloudBees, you can instantly connect to XebiaLabs Deployit (a fully automated deployment solution) and immediately begin reaping
the benefits of delivering continuously. Missed Andrew Phillips' JUC presentation, Preparing for Enterprise Continuous Delivery: 5 Critical Steps? View the slides here.
As reported in various places, there was an incident in early November where commits in our Git repositories were temporarily misplaced by accident. By the middle of the following week we were able to resurrect all the commits, and things are back to normal now.
As there is much confusion and misunderstanding in people's commentary, we wrote this post to clarify exactly what happened and what we are doing to prevent a recurrence.

## Timeline
In the early morning of Nov 10th 2013, one of the 680 Jenkins developers mistakenly launched Gerrit with a partially misconfigured Gerrit replication plugin, while pointing Gerrit at his local directory containing 186 Git repositories cloned from the GitHub Jenkins organization. These repositories had been checked out about two months earlier and weren't kept up to date. The Gerrit replication plugin then tried to "replicate" his local repositories back to GitHub, which it considers mirrors, by doing the equivalent of "git push --force" instead of a regular push. Unfortunately, the replication plugin defaults to a forced push, which is the opposite of what Git normally does, and replication happens automatically, which is why this mistake impacted so many repositories in such a short time.
As a result, these repositories had their branch heads rewound to point to older commits; in effect, the newer commits were misplaced by the bad git-push.
When we say commits were "misplaced", this is an interesting limbo state that's worth explaining for people who don't use Git. A Git commit is identified by its SHA1 hash, and these objects never get overwritten, so the misplaced commits were actually still on the server, intact. What was gone was the pointer that associates a human-readable branch name (such as "rc") with the latest commit on the branch.
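For readers who want to see this limbo state concretely, here is a minimal, runnable sketch using throwaway local repositories (the branch name `rc` mirrors the example above; everything else is made up for illustration):

```shell
# A forced push rewinds the branch pointer, but the newer commit object
# survives on the server and the branch can be pointed back at it.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/server.git"
git clone -q "$tmp/server.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m one
old=$(git rev-parse HEAD)
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m two
new=$(git rev-parse HEAD)
git push -q origin HEAD:refs/heads/rc
# The accident: a forced push rewinds the remote branch head.
git push -q --force origin "$old:refs/heads/rc"
# The newer commit object is still intact in the server's object database:
git --git-dir="$tmp/server.git" cat-file -e "$new" && echo "commit intact"
# Recovery is just pointing the branch name back at it (a fast-forward push):
git push -q origin "$new:refs/heads/rc"
```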
By Nov 10th 12:54pm GMT, multiple developers had noticed this, and within several hours we figured out what happened. From the Gerrit log files and with the help of GitHub technical support, we were able to identify all the affected repositories, and an independent script was later written to verify the accuracy of this list.
Some of the Jenkins developers were closely following this development, and were able to restore branches to point to correct commits by simply pushing their up-to-date local workspaces back into the official repositories. Git makes it very easy to do this, and most of the popular plugins affected were restored in this manner within 24 hours.
At the same time, we needed to systematically restore all the affected repositories, to make sure that we had not lost anything. For this, we contacted GitHub and asked for their help, and they were able to restore most branch heads to their correct positions. We also independently developed a script to find out exactly which commits branch heads should be pointing to, based on the GitHub events API that exposes activity on Git repositories. This script found a dozen or so branches that had fallen through the cracks of GitHub support, and we manually restored those.

## Mitigation in the future
The level of support we got from GitHub, together with our ability to independently verify the lost commits and subsequently restore them, made us feel good about GitHub, and we have gained confidence in our ability to recover from future incidents.
That said, what happened was a serious disruption, and it's clear we should better prepare ourselves, both to reduce the chance of accidents like this and to improve our ability to recover. To that end, we hope GitHub will expose a configuration option to disable forced ref updates; they already do this on GitHub Enterprise, after all. Dariusz pointed out that CollabNet takes this one step further and offers the ability to track deleted branches, tags, and forced updates. Something like that would have made the recovery a lot easier.
We are going to make two improvements to our process so that we can recover from this kind of problem more easily in the future.
Firstly, we'll develop a script that continuously records the ref update events across the GitHub Jenkins organization. This will accurately track which branch/tag is created/updated/deleted by whom. In case of an incident like this one, we can use this log to roll back the problematic pushes more systematically.
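As a sketch of what such a logger might look like (this is an illustration, not the actual script; the endpoint and event fields are those of the public GitHub v3 events API, and `jenkinsci` is just the example organization):

```shell
# Hypothetical ref-update logger: extracts one tab-separated line per
# push/create/delete event from a GitHub org events JSON array on stdin.
log_ref_updates() {
  python3 -c '
import json, sys
for e in json.load(sys.stdin):
    if e["type"] in ("PushEvent", "CreateEvent", "DeleteEvent"):
        ref = e["payload"].get("ref") or "-"
        print("\t".join([e["created_at"], e["type"], e["repo"]["name"], ref]))
'
}

# A cron job would then fetch and append, e.g.:
#   curl -s "https://api.github.com/orgs/jenkinsci/events" | log_ref_updates >> ref-updates.log
```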
Secondly, we'll allow people to control access to individual Git repositories, as opposed to giving them all-or-nothing access to the entire array of plugin repositories.
The Jenkins developers decided to continue the current open commit policy despite the incident to preserve our culture, which helped us navigate through this incident without a single argument or flame war.

## FAQ

### Does everyone in the organization have full commit privileges to all the repositories?
Yes, with some exceptions. To encourage co-maintenance of plugins by different people, and to reduce the overhead of adding and removing people across our 1100+ repositories, we use one team that gives access to most repositories, and put committers in this team.

### I prevent forced pushes in my Git repositories. I'm safe from this trouble, right?
No, unfortunately something like this can still happen to you, as you can also accidentally delete branches. If you want to learn from our mistakes, you should definitely enable the server-side reflog to track ref update activity: running `git config core.logAllRefUpdates true` on the server will enable this.

### Can't you just have people with up-to-date copies push their repositories and fix it?
This is indeed how some of the repositories got fixed right away, where individuals were clearly in charge and known to have up-to-date local repositories. But this by itself was not sufficient for an incident of this magnitude. Some repositories are co-maintained by multiple people, none of whom could be certain they were the last one to push a change. Many plugin developers just scratch their own itch and do not closely monitor the Jenkins dev list. We needed to systematically ensure that all the commits were intact across all the branches in all the affected repositories.

### Can't you just roll back the problematic change?
Most mistakes in Git can be rolled back, but unfortunately a ref update is the one operation in Git that's not version controlled, so Git has no general-purpose command to roll back an arbitrary push. The closest equivalent is the reflog, the audit trail Git keeps for resolving cases like this, but that requires direct access on the server, which is not available on GitHub. But yes, this problem would not have happened if we were hosting our own Git repositories, or using Subversion, for example.
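To make the server-side reflog advice concrete, here is a minimal, runnable sketch in throwaway repositories showing the reflog surviving a bad forced push (illustrative only; this is exactly the kind of direct server access that GitHub does not expose):

```shell
# With core.logAllRefUpdates enabled on the server repository, every ref
# update (including a bad forced push) leaves an audit trail, and the
# operator can put the branch back directly.
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/server.git"
git --git-dir="$tmp/server.git" config core.logAllRefUpdates true
git clone -q "$tmp/server.git" "$tmp/work" 2>/dev/null
cd "$tmp/work"
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m one
git push -q origin HEAD:refs/heads/master
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m two
good=$(git rev-parse HEAD)
git push -q origin HEAD:refs/heads/master
# The accident: rewind the server's branch with a forced push.
git push -q --force origin "HEAD~1:refs/heads/master"
# The server's reflog recorded every update, including the bad one...
git --git-dir="$tmp/server.git" reflog refs/heads/master
# ...so the branch can be restored to the last good commit:
git --git-dir="$tmp/server.git" update-ref refs/heads/master "$good"
```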
This is a guest post from Alyssa Tong, who drives the organization of JUCs around the world.
If you missed JUC Palo Alto on Oct 23, 2013 the videos are now available.
We have started planning JUC 2014. It is hard to believe this will be the 4th annual JUC in the Bay Area. The growth in the Jenkins community since the first JUC is astounding.
Every year we are in search of a larger venue to accommodate the larger crowd. For 2014, the challenge of finding a venue for a capacity of 500+ attendees at a low cost will prove even more daunting. We would love to hear your suggestions for low cost venues (in the Bay Area) so that we may continue to keep entry cost low while providing convenience and the highest level of Jenkins education to attendees. Please send suggestion(s) to email@example.com
We are proud to launch the call for volunteers to join the JUC organizing committee (OC). If you are interested in shaping the 4th edition of this great event, please send email to firstname.lastname@example.org
We encourage you to share this blog within your network in case other people would be interested in joining the JUC OC or have ideas for a great JUC 2014 location.
In the hope of streamlining account-creation e-mail delivery and mailing list moderation, I have deployed SPF and DKIM over the weekend for e-mails coming out of @jenkins-ci.org, which includes account applications, Confluence, and JIRA.
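For readers unfamiliar with these mechanisms, the DNS records involved look roughly like the following zone-file sketch. This is purely illustrative: the policy, selector, and key below are made up, not the actual jenkins-ci.org records.

```
; SPF: declares which hosts may send mail for the domain (illustrative policy)
jenkins-ci.org.                  IN TXT "v=spf1 mx ~all"
; DKIM: publishes the public key receivers use to verify message signatures
mail._domainkey.jenkins-ci.org.  IN TXT "v=DKIM1; k=rsa; p=<public key>"
```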
I've also used this opportunity to switch the sender of JIRA notifications back to email@example.com. It was originally this way, then changed to firstname.lastname@example.org when someone complained (on what grounds, I no longer remember).
To the degree that I have tested the setup, it is working correctly, but if you notice anything strange, please let me know.
In this Open Space Technology-style event, we went over war stories from users. Just to show the degree of seriousness: some of these people run 1500+ slaves, and others run Jenkins in an HA configuration with data center failover! We then picked various topics in the afternoon and discussed what people would like to see to make Jenkins scale further. Slides and raw notes from this meeting are available here.
The event allowed me to rethink and revisit what I thought we should do in coming days in the area of scalability.
The event was far more popular than we originally anticipated, and we had to turn away many folks. So I'm going to do a webinar covering what we did and what we talked about. If you are interested in this area and want to see what's being considered and share your thoughts, please join us on Nov 19th at 10am PT.
This post is a little off topic but is so seriously cool it has to be covered.
In the past, to perform cross-browser, or better yet cross-platform testing, you would install the Selenium Grid plugin into your Jenkins master and have a number of physical (or virtual) machines with different browsers installed on them and set up some tags so that the Selenium Grid could find them. Once all this infrastructure was set up you would write your Selenium tests.
Side note: we're all doing cross-platform testing as part of our builds, right? If you answered "Wrong!" like Arnie in 1985's Commando, I highly recommend that you integrate cross-platform testing in all your web application builds before the 11 hours are up (if you're confused, watch the movie).
While the traditional approach of in-house testing slave infrastructure is still valid and might be necessary for your organisation, with BrowserStack you can put all the infrastructure in the cloud. They take care of it for you. BrowserStack makes it really easy: they essentially provide the functionality of Selenium Grid, the test slaves and the tags.
There are two main ways of interacting with BrowserStack: Automate and ScreenshotsAPI. There is also "Live" testing, but in my opinion it's only really useful for concept demonstrations. It's not programmatic like the first two, and we're interested in the programmatic stuff. Responsive appears to me to be the same as ScreenshotsAPI.
They support a ton of browsers and platforms, and you can even test non-public-facing URLs (like test environments) with the web tunnel functionality. DANGER WILL ROBINSON: be careful to note that the browsers and platforms supported by ScreenshotsAPI are not the same as those supported by Automate. They are slightly different (at least that is the case at the time of writing).
Happily, all this leaves you free to focus on writing your tests. With only a few tests written and just some of the supported platforms configured, you quickly find yourself with a test suite of hundreds of combinations: tests x browsers x operating systems. :D
I found that while they give you Automate code examples (Python, Ruby, Java, C#, PHP, Perl, Node.js), which are a helpful bootstrap, it takes a bit of work to understand exactly what you can and can't do. For instance, I tried for two days to get a batch file to call the third-party ScreenShooter tool repeatedly. Even with the /w argument it won't work after the first iteration (at least I haven't been able to get it to yet). BrowserStack's support team advised me that their servers limit you to one screenshot call at a time. You won't find that in their documentation.
While the documentation on the BrowserStack website isn't too bad I feel it is lacking in some areas.
I wrote stacked-browsers to pull all the threads together and make your lives easier...that...and my current project needed it.
stacked-browsers implements different flavors of integration with BrowserStack: Java, C# and Ruby (via ScreenShooter; I have yet to get the Screenshots API Ruby library to work). If you would like to implement testers for the other languages, feel free to send me a pull request. The structured README at each level of the source tree should explain everything from setup to building to running. Take note of the batch files I wrote to make these steps easy.
It is cool to execute the TestRunner from the command line and see the test session appear almost immediately on Browserstack.com on the Automate page with the name you specified.
BrowserStack's example code has 16 degrees of parallelism. I noticed in the top corner of the account I was using that they afforded me 5 degrees of parallelism. I had mixed results with this. With 2, I only seemed to have issues with one of the Samsung Android platforms; with 5 set, I had 8 tests running at times and had more timeout issues. I also noticed that after about 10 minutes of my session running this dropped to 2 tests running anyway. I concluded that BrowserStack was throttling my session. I asked about this, but their support team has yet to confirm or deny it.
For a quick win, I chose Java Parallel JUnit (scroll down to "JUnit test for running in parallel browsers" for the helper class) as the initial mode of choice, but I expect the other flavors would work equally easily. Overall, I am very impressed with how smoothly BrowserStack works and would recommend it to anyone wanting to do cross-platform testing.
DISCLAIMER: I am neither employed by BrowserStack nor being paid for these comments (I am not a Sydney radio host...you know what I mean). I just think it works really well.
Oh, and if anyone manages to write a batch file that successfully calls the ScreenShooter tool or implement a client for the Screenshots API Ruby Library (or any other improvements) send me a pull request for the stacked-browsers repo.
I hope you have found this useful.
Till next time...
UPDATE: Unfortunately I have had to move the stacked-browsers repo to a private repository. My employer said it was too valuable...!
There are two ways to build a Maven project with Jenkins*
- Use a free-style project with a Maven build step
- Use a Maven-style project
- It is very attractive because it is easy to configure (so users use it) and gives nice per-module reports
- When it blows up, and it will blow up, it blows up big
* well actually three ways if you include the literate job type
(This is a guest post by Alyssa Tong, the lead coordinator of Jenkins User Conference)
Our 3rd annual Jenkins User Conference in the Bay Area, being held next Wednesday in Palo Alto, is fully booked to capacity, and we couldn't be more excited for this event! It's going to be an amazing day of learning, talking to technology experts, networking with other Jenkins users, seeing cool demos and finding out how you can contribute to the Jenkins open source project.
This event is being held at the Oshman Jewish Community Center, and registration begins at 8am. There will be breakfast and plenty of coffee to get you caffeinated. The welcome announcements will begin sharply at 9am, and the keynote address will follow shortly after. We're so excited to have thirteen sponsors investing in and supporting the Jenkins community in the continuous integration space.
New this year, there will be BoF sessions, so be sure to sign up for your preferred discussion at check-in. Or suggest a topic by leaving it in the comments section below. Let us know what Jenkins topics are near and dear to your heart.
For those who missed out on purchasing your ticket or are unable to attend, we are happy to offer the live stream of Track 1. You can choose to watch the entire track or just specific session(s). Either way don’t forget to chat and tweet. We will also tweet live from the conference so you can follow along that way as well. Follow @jenkinsconf for the latest updates.
Thank you to everyone for making this sold-out event possible.
Can’t wait to see everyone on Wednesday!
(This is a guest post from Gareth Bowles, a Senior Software Engineer at Netflix.)
Jenkins has been a central part of the Netflix build and deploy infrastructure for several years now, and we've been attending and speaking at JUC since it started in 2011. It's a great opportunity to meet people who are as passionate about build, test and deployment automation as we are - although as Kohsuke said last year, having all those folks in one place could be dangerous if there's an earthquake!
CloudBees and the JUC Organizing Committee have put another great program together this year. We'll be doing two talks. Justin Ryan and Curt Patrick will present "Configuration as Code: Adoption of the Job DSL Plugin at Netflix", describing how we're shifting our users from manual job configuration via the UI, to defining their jobs as Groovy code using the Job DSL plugin. Justin and Curt will describe how Netflix development teams can now create and maintain complex sets of jobs for their projects with the bare minimum of coding.
In my lightning talk "Managing Jenkins with Jenkins", I'll go over how we use Jenkins' system Groovy scripts to maintain and monitor our Jenkins masters at a scale that couldn't be achieved with manual processes, and without the overhead of writing custom plugins.
As usual, there will be a whole crew of Netflix engineers at JUC this year. If you're interested in working on build and deployment at Netflix scale, find one of us (we'll all be wearing Netflix gear) to learn more - we're hiring!
(This is a guest post from Owen B. Mehegan aka autojack)
The Jenkins User Conference - Palo Alto is coming up on October 23rd! The schedule for talks is full, but we've been looking for a way to give other members of the Jenkins community some visibility. There are many people who have contributed to the project in various ways, whether it's contributing to core, developing plugins, writing documentation or just helping new users.
If this sounds like you, we're interested in giving you 10-15 minutes to talk to the rest of the conference attendees! The format is currently undefined and may be left up to you. You could do a Q&A, talk about features you've worked on and why they were important to you, or just offer some "pro tips" that you've developed based on your experience. The main point is to help put faces to some of the names in the community, and also help encourage others to contribute themselves! We're thinking of having these sessions during lunch and the exhibit hour (see here for the schedule).
If you're interested in this, or know someone else who might be that I could contact, please let me (owen at nerdnetworks dot org) know! If we can get some critical mass around it then we'll go ahead.
(This is a guest post by Stephen Connolly)
Every developer, at some stage, will be handed a project to maintain that somebody else was responsible for. If you are lucky, the previous developer will not have left the organization yet, and you get a brief knowledge transfer as they pack up their desk before heading to their new job. If you are unlucky, you aren't even given the details of where the source code is hiding.
Now begins the detective work, as you try to figure out how to build and release the project, set up Jenkins jobs to build the project and run the tests…
It doesn't have to be this way, you know!
What if I told you there was a file sitting at the top level that told you exactly how to build the project and do the important things? You'd be interested, wouldn't you?
When I tell you it's the README file? “But that's all lies. Nobody keeps that up to date. Argh!!!”
But what if Jenkins reads the README file and uses it for the build definition? Now you not only have a CI system ensuring that the build definition is correct, but you have less work to do setting up the job.
What if, because the build definition is now in source control, you could have Jenkins create jobs for each branch with ease? The joy of cheap branches that modern source control systems such as Git and Mercurial give us no longer comes with the pain of having to create Jenkins jobs for each branch (and the further pain of having to remember to tidy up when the branch is gone).
That is the promise delivered by the Literate plugin.

## How does it work?
First of all, because Jenkins will be looking at all your branches, you need a way to tell Jenkins which branches it makes sense to try and build. For example, if your project lives on GitHub, you are unlikely to want the gh-pages branch to be treated like a branch of your actual code, and there are going to be branches that have a README file, but not one that Jenkins understands, so we will want to ignore them too.
You tell Jenkins that a branch is one to build by putting a marker file in the root of the branch. By default the marker file is called .cloudbees.md. If the marker file is present and empty, then the literate job type will assume the build instructions are in README.md. If the marker file is present and has build instructions, then the literate job type will just use those instructions.
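As a tiny illustration of the two marker-file modes described above (`make test` is a placeholder command for illustration, not anything the plugin requires):

```shell
# Work in a scratch directory to keep this self-contained.
cd "$(mktemp -d)"

# 1. An empty marker file: the branch is buildable, and the literate
#    job type reads the build instructions from README.md.
touch .cloudbees.md

# 2. A marker file with its own instructions: README.md is not consulted.
#    ("make test" is a placeholder instruction.)
printf '# Build\n\n    make test\n' > .cloudbees.md
```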
In order to make it easy to provide the instructions, the formatting requirements for a literate description of a project's build commands are rather minimal.
The minimal description is just a section with the word "build" and a verbatim code block in that section. Here is the obligatory minimal "hello world" project description:

    # Build

        echo hello world
or if you don't like indenting you could use the GitHub-style triple-back-tick:

    # Build

    ```
    echo hello world
    ```
Part of what makes this a literate style of build description is that you can freely intersperse the description of what and why the commands do with the actual commands, e.g.:

    # Build

    We will greet the world with our great literate project description

        echo -n "Hello"

    Now that we have announced our intention to greet some people, we need
    to qualify exactly who we are greeting

        echo " world"

    That was just perfect. Time for a cup of tea
The first section heading containing the word "build" identifies the section that is assumed to hold the build instructions. (The keyword that is searched for is configurable, but not yet exposed in the literate plugin's UI.) The following is also a valid README.md for printing hello world:

    Our super hello world project
    =============================

    This is a project to say hello to the world

    How to build
    ------------

    You can build this project by running the following command:

        echo hello world

    Credits
    -------

    This project would not have been possible without the existence of
    Assam loose leaf tea.
Now this is all very well, but what if you need different instructions for building on Windows versus on Linux? And for that matter, how does Jenkins know where we should build this project? Plus, Mr Joe Random needs to know what he needs to install on his machine to build it for himself.
The first section containing the word "environment" identifies the section that contains the details of the environments to run the build on:

    Hello world project
    ===================

    This is a simple hello world literate project

    Environment requirements
    ------------------------

    The project is built and tested by Jenkins on the following build
    environments, so it is known that the build instructions work on the
    following environments:

    * `windows`
    * `linux`

    How to build
    ------------

    The build instructions are platform dependent:

    * On `windows`:

            echo "hello world"

    * On `linux`:

            echo hello\ world
When Jenkins sees bullet points in the environment section, it assumes each bullet point corresponds to an environment to run the build on. Each environment is specified by at least one code snippet, which helps define the requirements of the environment. By default Jenkins will look for tool installers with the same names as the labels. If it cannot find any matching tool installers, it assumes that the labels are Jenkins slave node labels. (The strategy is pluggable, but not yet exposed in the UI of literate builds.)
When you have multiple environments on which to build and test, you have two choices on your build instructions. You can either:
- Have one and only one set of commands that work on all environments; or
- Have bullet points that cover all the specified environments.
So for example, if you are building on the following environments:
- windows, java-1.6, ant-1.7
- windows, java-1.6, ant-1.8
- windows, java-1.7, ant-1.8
- linux, java-1.7, ant-1.7
- linux, java-1.7, ant-1.8
You need to have bullet points in your build section that can match each of those options, but as long as there is a match for every option you are ok. So for example:

    ANT version finder
    ==================

    Finding out the version of ANT on various platforms

    Environments
    ------------

    Nesting bullet points multiplies out the options

    * `windows`
        * `java-1.6`
            * `ant-1.6`
            * `ant-1.7`
        * `java-1.7`, `ant-1.8`
    * `linux`, `java-1.7`
        * `ant-1.7`
        * `ant-1.8`

    Build
    -----

    The first match with the highest number of matches wins, so we want
    windows to get special treatment:

    * `windows`

            call ant.bat -version

    * `java-1.7`

            ant -version

    We could have picked `linux` for the above if we wanted, but this
    matching will have the same effect and better illustrates how
    matching works.
That is a mostly complete description of how the build and environment sections work. In general, everything except verbatim code blocks and bullet points with code snippets gets ignored.
There are other sections that the literate project type allows for; these are called "task" sections. We haven't written the code to support them yet, but the idea is that these will work a bit like basic build promotions with the promoted builds plugin. There will be a UI in Jenkins that lets you kick off any of the task sections that you define as being valid for the job type, in pretty much exactly the same way as the promoted builds plugin works.
After that, everything else in the README.md is ignored.

## How do I get the test results into Jenkins?
Jenkins is not just about build and test. A lot of the utility in Jenkins comes from the additional reporting plugins that are available for Jenkins. (The build-step plugins are less relevant with literate-style projects, because you want the instructions in your README to be ones that people reading it can also follow.)
So there is additional metadata about your project that you want to give to Jenkins. We put that metadata into a folder called .jenkins in the root of your source control.
There are two levels of integration that a Publisher/Notifier can have with the literate project type. The first level is a basic XML description of the plugin configuration. If you have ever looked at the config.xml of a Jenkins job, you will be familiar with this format.
If we have a Maven project and we want to collect the unit test results in Jenkins, we might have a README.md like this:

    Maven project with tests
    ========================

    Environments
    ------------

    * `java-1.7`, `maven-3.0.5`

    Build
    -----

    ```
    mvn clean verify
    ```
And then we create a .jenkins/hudson.tasks.junit.JUnitResultArchiver.xml file with the following:

    <hudson.tasks.junit.JUnitResultArchiver>
      <testResults>**/target/surefire-reports/*.xml, **/target/failsafe-reports/*.xml</testResults>
      <keepLongStdio>true</keepLongStdio>
      <testDataPublishers/>
    </hudson.tasks.junit.JUnitResultArchiver>
The literate plugin adds an Action to all Free-style projects that allows exporting these XML configuration snippets in a .zip file for unpacking into your project's source control. Each publisher/notifier has its own file, so it should be easy to mix and match configuration across different projects and enable/disable specific publishers just by adding/removing each publisher's file.
The XML itself can be a bit ugly, so there is a second-level integration, where a Publisher/Notifier plugin can implement its own DSL. The literate plugin ships with two such DSLs: one for archiving artifacts and the other for JUnit test results. So the above XML file could be replaced by a .jenkins/junit.lst file with the following contents:

    **/target/surefire-reports/*.xml
    **/target/failsafe-reports/*.xml

## Not everything makes sense in source control though…
There are always going to be things that you need to configure in Jenkins. For example, there may be some sources of branches that you don't trust; a good example would be pull requests on GitHub. We have a concept of branch properties in the literate project type that will allow defining exactly what a trusted branch source should be allowed to do and what an untrusted branch source should be allowed to do. It does not make sense for that information to be embedded within the untrusted branch itself.
Similarly, coordination between different Jenkins projects is something that does not belong in source control. The names of those Jenkins projects (and even their existence) are not knowable from source control, so it does not make sense to keep that information there.
Information about how to map the description of the build environment in the README.md file to the build environments available to Jenkins does not make sense in source control because your Jenkins node configuration details may change over time.
In other words, literate projects do not remove the need to configure things in Jenkins. They do, however, remove a lot of the need, especially the need to tweak the exact build commands and the location where build results should be picked up from.

What's not done yet?
Here is a list of some things I want to see for literate builds:
- A literate build step so that people can use some of the literate magic in their free-style projects while they migrate them to literate-style
- Support for literate task promotion flows (I think Kohsuke has signed up to help deliver this)
- Exposing the configuration points such as the marker file name (a global config option as well as per-project override) and the keywords to search for in the README.md (this is mostly UI work)
- Adding in some support for other markup languages (I'd really like to see AsciiDoc formatted README parsing, e.g. README.asc)
- Branch properties for untrusted builds (to do things like restrict the build execution to one explicit environment, put an elastic build timeout in place, wrap the shell commands in a chroot jail, etc)
- Branch properties for build secrets (so that the production and staging branches can get the keys to deploy into their respective environments)
- Collapsing the intermediate level in the UI when there is only one build environment.
- Eliminating the double SCM checkout when the backing SCM supports the SCMFileSystem API so that builds work even faster
- Reusing the Git repository cache when using Git branch sources.
- Some nicer integration with GitHub (I have most of this done, but I think it would be irresponsible to release this without having the Untrusted branch properties implemented as otherwise Pull Requests could become a vector for abuse)
- Finishing the support for Subversion credentials migration from the legacy credentials storage mechanism to the new Credentials plugin storage mechanism (not strictly literate project related, but Subversion is still a popular SCM and until this gets done we cannot release a version of the Subversion plugin with literate project support)
- Adding nice DSLs for all the Publishers and Notifiers
- Adding SCM support to all the SCM plugins
- Adding branch property support for the Build Wrapper / Build Environment / Job Property plugins where that makes sense.
Having said all that, the core functionality works right now for Git/Subversion/Mercurial on Jenkins 1.509+, and it is only by playing with this functionality that you can see how it could change the way you use Jenkins.

How do I try this out myself?
Last week Kohsuke set up a new “Experimental” update center in Jenkins OSS. The reason for this new update center is that we have a lot of (potentially disruptive) changes to many plugins, and if we started cutting releases, users might get annoyed if they ended up upgrading to these plugins before they had all been properly tested.
The “Experimental” update center includes plugins that have alpha or beta in their version number, while the other update centers now exclude those plugin versions.
So if you want to play with these plugins, you need to change your Jenkins instance's update center URL to:

http://updates.jenkins-ci.org/experimental/update-center.json
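If you prefer to make the change outside the UI, the list of update sites lives in `hudson.model.UpdateCenter.xml` in your `JENKINS_HOME`. A minimal sketch of that file pointing at the experimental update center (this assumes a default single-site setup; back up the original first, and restart Jenkins for the change to take effect):

```xml
<?xml version='1.0' encoding='UTF-8'?>
<sites>
  <site>
    <id>default</id>
    <url>http://updates.jenkins-ci.org/experimental/update-center.json</url>
  </site>
</sites>
```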
I would recommend that you use a test Jenkins instance to play with.
(WARNING: shameless plug) You could also just fire up a Jenkins instance in the cloud using CloudBees' DEV@cloud service and follow these handy instructions to enable access to the experimental plugins:
The 10 best bug reports on literate builds submitted before the Jenkins User Conference next month will receive a prize from CloudBees, Inc. I was able to get a commitment that the prize would be at least a T-shirt, and I am hoping to get some more swag added to the prize pool. CloudBees employees and relatives of CloudBees employees are not eligible for the bug report prize!
Lately there have been several cases where we wanted to deliver beta versions of new plugins to interested users.
To simplify this, we have created a new "experimental" update center, where alpha and beta releases of plugins will be available. Users who are interested in downloading them can go to "Plugin Manager", then to the "advanced" tab, and type in http://updates.jenkins-ci.org/experimental/update-center.json in the update center URL field.
When you are looking at the "available" tab, plugins that are experimental are marked accordingly to help you decide which ones to install. Once you have installed the beta plugins you wanted, you can switch back to the default http://updates.jenkins-ci.org/update-center.json update center.
If you are developing plugins and you want to distribute experimental plugins, all you have to do is to put "alpha" or "beta" in the version number of pom.xml. The backend infrastructure takes care of the rest.
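For example, a plugin's pom.xml would only need a version like the following to be picked up by the experimental update center (the artifact name here is hypothetical):

```xml
<project>
  <!-- ...usual parent, groupId, packaging, etc. ... -->
  <artifactId>my-plugin</artifactId>
  <version>1.2-beta-1</version>
</project>
```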
The latest edition of Continuous Information is out for your reading pleasure.
- Health Check-up for Jenkins: Kohsuke’s Tips on Keeping Jenkins Happy
- Jenkins continues to take over the world, with more than 65,000 active installations and more than 800 plugins
- Events: Jenkins User Conference – 10/23 in Palo Alto, CA (use discount code BEE-JUC); Jenkins Scalability Summit 10/24; and more
- Jenkins made the SD Times 2013 Top 100!
- What’s new in Jenkins? The hottest new Jenkins improvements…
- How to build your own Jenkins Traffic Light
PS - We love contributions to Continuous Information, so if you have a Jenkins tip, trick, or plugin you’d like to feature, please email us.
The Jenkins User Conference (JUC) Palo Alto is less than two months away!
The organizing committee, 13 sponsors and 16 speakers have been hard at work coordinating a fun and educational day for the Jenkins community on October 23. Check out the agenda and see for yourself! Speakers are traveling from around the globe to take part in this conference, including a number of usual suspects. Dedicated Jenkins experts are coming in from London, Israel, Estonia, Sweden, Taiwan, Boston, Seattle, Texas and, of course, the local Bay Area.
New this year, we’ll live stream an entire track, courtesy of our Silver sponsor, Confreaks.
In keeping with tradition, every year we create a one-of-a-kind Jenkins t-shirt for JUC attendees. This year we are sticking with the ever-popular landmark of Palo Alto, Stanford University. And we are going bright…hope you like (Jenkins) red!
We are always on the look out for unique and creative ideas for Jenkins t-shirt designs. If you have a cool design in mind please send it to email@example.com. You may just see the Jenkins community wearing your design at next year’s conference.
Also check out the great Jenkins collectible that CloudBees, the Platinum sponsor, is giving out at the CloudBees table (I heard he looks even better in person). Quantity is limited, so be sure to pick one up. You might have to sing, dance, bark or just complete a survey in exchange for the Jenkins bobble head. Most importantly, don’t forget to have Kohsuke sign it to make it official.
JUC isn’t complete without some good BEvERages. Gold sponsor Black Diamond Software is ponying up a keg of beer after the conference. Leave us a comment (below) about what kind of beer strikes your fancy and it might just be there.
If you’ve read this entire blog and have not yet registered to attend, here’s additional incentive for you. Use discount code BEE-JUC to get early bird pricing, that’s a $26 saving off the current price of $80. Discount expires October 4, 2013.
As JUC Conference Chair, I am always looking for ways to improve JUC. Leave your comments below on ways we can make this ‘Your’ conference.
Looking forward to seeing you at JUC on October 23.
JUC Conference Chair
This is a guest post by Mike Rowan, VP R&D at SendGrid.
Q: Tell us a bit about what your service and plugin do. Who is it for? What are the highlights of your plugin?
A: Loader.io is a simple-to-use cloud-based load testing service. The service is designed for developers and people who need to ensure applications are performing as they should. It allows developers to perform large-scale load tests on demand, which lets them understand the scalability and performance of their applications. We realize Jenkins is the preferred build service for a lot of our users, and we know providing a way for them to implement, measure and improve application performance during the continuous build cycle is important. So we wrote a Jenkins plugin that allows load testing to be brought into the continuous build and deployment process with ease.
Q: Did you have to convince your boss/lawyers to open-source your plugin? What was the pitch?
A: No, at SendGrid our focus is always to help make developers’ lives easier, and when we can, we like to provide tools that they can hack on. Since the Jenkins platform is itself an open source project, following the same model to provide our plugin made perfect sense. In addition, we encourage others to build on our work, help improve it and ultimately make it better for everyone using it.
Q: How did you learn how to write a plugin?
A: We use the Jenkins platform ourselves, and we leverage a number of the plugins available. Having access to these and the Jenkins documentation gave us a great head start. It was an easy decision to write the Jenkins plugin for loader.io, and the Jenkins community provided both detailed instructions as well as support when we needed it.
Q: Any gotchas in the experience of developing a plugin that you want to share?
A: The overall process of developing the plugin was straightforward and simple, but we did run into some scope creep in the middle of the dev process. We found that since the platform was so easy to write for, it made us keep adding more and more features. Usually this is good, but in the case of our project, we wanted to provide the most value as quickly as possible. So we scaled back, focused on solid execution for the most important features, and are already preparing to launch a new version with the things we reserved for post v1 availability.
Q: What is the reaction from users so far?
A: The users we’ve spoken with love the plugin. In addition we’ve already gotten great feedback from some community members on “nice to have’s” in the plugin, some of which we’re already working on.
Q: What tips do you share to those who are interested in writing plugins?
A: If you have a service that provides value in the build, deployment and post deployment process, then you should be writing a Jenkins plugin. Two things are important for anyone writing a plugin: 1) be sure the plugin you’re writing is going to provide true value (if you need it yourself this is a good sign), and 2) make sure you understand the scope of the project and deliver core features and value first, then focus on some extra things. Providing a valuable plugin sooner than later will help you identify all the right additional features to include, especially when collecting live community feedback.
Some of the things we focused on early in the process were to identify the core features, and more importantly to make it very easy for users of Jenkins to install, use and interpret the loader.io plugin and results. We wanted to allow users to leverage our plugin for multiple environments and builds with system and global credentials. To do this, we decided to make use of the Credentials plugin (https://wiki.jenkins-ci.org/display/JENKINS/Credentials+Plugin), which is a heavily-adopted plugin that provides a standardized API for plugins to store and retrieve credentials. This plugin allows our users to add and use different credentials in one single Jenkins environment. In addition, we created a new re-run feature which, when used with continuous build and testing, provides a deep view into the performance of an application over time. Finally, we wanted to bring the same UI experience users have in our environment into Jenkins, which we did by preserving the load test report model and making it function the same in the Jenkins UI. Doing this makes it easy for users to have consistency between the UIs and more easily understand the results regardless of where they’re viewing them.
It’s very easy to write a Jenkins plugin - I hope these insights will encourage you to write your own.
ps - We’d love your feedback too. Check out our newly-released Jenkins plugin for loader.io and let us know what you think.
This is a guest post by Aske Olsson
Extreme feedback is an incredibly powerful way to drive quality and accelerate your developer fast feedback loop.
Having eXtreme Feedback Devices (XFDs) hooked up to your Jenkins jobs gives everyone on your team instant insight into the current state of the software. At customer after customer we've seen extreme feedback devices drive significant incremental productivity gains, so about a year ago we started talking about taking the concept mainstream and making it easily available to any development team. So, as a small side project, we decided to scratch our own itch and developed an easy-to-deploy, Linux-based, laser-cut extreme feedback device designed specifically for Jenkins. It instills a feeling of urgency when the build is broken, and a better sense of achievement once the problem is fixed. Just connect the XFD to your network, install the "extreme feedback plugin" on your Jenkins server and configure which jobs to feedback extremely.
At the Jenkins Code Camp in Copenhagen today (with Kohsuke) we made the lamp speak the name of the developer who broke the build :), improved the plugin's UI in Jenkins, and got the lamp's display to list all the developers who contributed to the last change. Of course, you can contribute too: just fork the repositories here and here and create a pull request.
If you're interested in trying out extreme feedback on your own team, you can order your own XFD lamp.