# One less Pull Request

It's that time of the year again: many websites and organisations will soon push you to contribute to Open Source, for example via Hacktoberfest (I got a nice T-shirt last year), and 24pullrequests seems to gain traction each year as well. These are really nice incentives that push users of open source to start contributing, and already-seasoned developers to try touching new projects.

Here is a request I have for you, whether or not you participate in these events: please close a Pull Request.

## Less is More

While I really appreciate new contributions, there is a point where too many open pull requests can – I think – be harmful. I'm going to lay out the various cases, why I think they are harmful, and what can be done.

Here are two specific examples: the SymPy project (sorry if Aaron feels targeted), whose authors are absolutely extraordinary and reactive; the current count of open PRs is 378. Matplotlib is also apparently at 207. You can see in the discussion linked here that maintainers feel differently about high numbers of PRs.

### I open too many pull requests

I currently have 12 open pull requests; see how many you have. This means that I have to follow up with around 12 projects every day, which carries an extremely high cognitive cost of context switching. I try not to keep a PR open for more than 6 months: if it's older, it's most likely not going to be merged or taken care of by the maintainers. Every time I get to this screen I spend at least 30 seconds wondering what to do about old PRs.

My advice is to stay focused: if you are not going to work on a pull request, let the maintainers know: close it. It can always be reopened. You might want to leave a message explaining why you are not working on it, and whether you would be happy for someone else to take over.

I'm now back to 8. It fits on one screen, and I can be more focused.

Also, if you are a maintainer and know a pull request will likely not get merged, I would prefer you not give me false hope: close it, and explain why, even if it's just that you are busy with something else and would appreciate a resubmission later. I'm more likely to get over it and try a few more times than if my first contribution got no response.

### I receive too many pull requests

I strongly encourage you to try minrk.github.io/all-my-pulls: it lets you view all the pull requests you have the ability to merge, and filter out repositories you do not wish to see. After filtering, I have 61 pull requests across 19 repos. That is also too many to stay focused on.

Many of these pull requests have stalled, and I would greatly appreciate the authors closing them if they have no intention of working on them further. To be honest, many of the oldest pull requests have entered this "awkward state" where I want to close them but don't actually do so, because it can be rough for authors to see their work dismissed.

As a maintainer, I should do a better job of saying when a pull request has stalled and is just polluting the PR list: close it with a nice explanation. It's always possible to reopen it if needed. GitHub allows canned responses; I use one as a template listing the PR-closing policy. I've found that having a clear policy often makes the decision easier. And sometimes closing even allows work to be resubmitted, appear at the top of the pile, and start anew.

There is also the possibility of taking over the author's work and finishing it up in a separate PR, or pushing directly to the author's fork if they allow it. I personally rarely do that, as I feel it is a slippery slope toward the maintainer doing everything.

I find myself much more efficient when there are only 5 to 6 open pull requests. I can keep track of each of them, judge whether the work will conflict, and give each one proper care. I fail to do so when there are several pages of them.

### I don't contribute to repositories that have too many PRs

When I come across a repository with more than 20-ish open pull requests, I tend to think that the authors are not responsive, so why bother contributing? I know these are often only impressions, and I can get over them because I am lucky enough to often know the maintainers. This feeling is hard to get over, though, on repositories I'm new to.

With a high number of open PRs, I am also discouraged from searching whether someone is already fixing the bug I saw, or implementing the feature I wish for. Moreover, the higher the number of open PRs, the longer it is likely to take the maintainers to review mine, and the higher the chance that I will need to rebase my work, which, git master or not, can be a painful process to go through (and to ask someone to go through).

I'm pretty certain I'm not the only one discouraged by seeing a large number of open, inactive pull requests. I asked on Twitter, and it looks like roughly every other respondent is discouraged from contributing if too many PRs are open.

## What do you think?

The above paragraphs are my thoughts on too many open pull requests. How do you feel about it? As you might have read in the Twitter conversation linked above, different people have different opinions.

If you want to comment, please open an issue on GitHub, and if you have the courage to help improve my English, feel free to send me a PR (sic) to make this more readable.

## Close a PR!

Thank you for reading this far! If you want to restore part of some maintainers' sanity, or make your project appeal a bit more to some users, please go close a PR! Or help finish a PR that has stalled! I can't give you a free T-shirt like Hacktoberfest does, but feel free to tweet with the hashtag #IClosedAPR!

Note: I started writing this in September; it took me a while to finish, so things might have changed a bit in the meantime.

This is a follow-up to my previous post about retiring Legacy Python. I got some responses; in particular, thanks to Konrad Hinsen, who took the time to ping me on Twitter and responded in a GitHub issue.

First, I want to apologize if the post felt condescending; that was not my intention, and on re-reading it I still do not feel it is. Maybe this is because I'm still not skilled enough with English, and this is exactly the kind of feedback I appreciate.

### Don't break user code

Another thing that seemed to leak through my post was the impression that I was advocating breaking user code. What I am advocating is taking a step forward by actually dropping compatibility with previous versions. I really like semantic versioning, and as a tool maintainer I would not drop support for APIs between minor revisions. Keeping backward compatibility is not always the easy job; in a lot of cases I fought hard and complained about changes that shouldn't have happened, and even reverted pull requests. As a developer I would have liked those changes to go in, though.

What I am advocating is for libraries to start thinking of new APIs that would be Python 3 only, and potentially to consider a new major revision that does not support Legacy Python, or at least makes no effort to support it. This is already the path a growing number of tools are taking. The Nikola developers have decided that the next version (8.0) will be Python 3 only (more background). I suppose the necessary modifications would have been minimal, but the overhead and developer time of supporting Legacy Python at the same time were too much. Scikit-bio is envisaging the same thing: whether or not they should stop trying to support Python 2. Some projects have even already started actively removing their Legacy Python compatibility layers.

The latest version of Anaconda can be installed with the default root environment being Python 3.5. Some people build really awesome tools that are Python 3.4+ only.

An increasing number of distributions now also ship with Python 3, and Python 3.5 is more and more considered the default installed Python. Ubuntu is considering making Legacy Python persona non grata in its next stable release, and so is Fedora, apparently.

Changing major library versions would not break old code, or at least not more than any other major-version upgrade. Users would still be able to pin the versions of the libraries they use, and if there are dependency conflicts, your package manager is there to resolve them. There is a specific requires_python metadata field you can give to your package to indicate whether it is Python 2/3 compatible (down to minor versions), which would simply prevent non-compatible versions from being installed, though it is not commonly used. This is one of the reasons the Py2/Py3 transition might look complicated to some people. One example is Fabric, which is not Python 3 compatible, but will happily install into a Python 3 environment. In a perfect world you would just pip install your package and get the latest compatible version without worrying about Legacy Python/Python 3 compatibility, but we are venturing into packaging, so here be dragons.
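As a sketch of how that metadata could be declared (the setuptools spelling is `python_requires`, and the package name below is hypothetical):

```python
# setup.py — a minimal sketch, assuming setuptools; "python_requires" is the
# setuptools spelling of the Requires-Python metadata field.
from setuptools import setup

setup(
    name="mypackage",        # hypothetical package name
    version="1.0",
    # recent pip reads this and refuses to install the package
    # on interpreters outside the declared range
    python_requires=">=3.4",
)
```

With this in place, pip on a Legacy Python interpreter would select an older, compatible release instead of installing one that cannot work.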

### Python 3 does improve on Python 2

Even if, as a scientist, you might see Python as only a tool, this tool relies on a lot of work done by others. You might not feel the difference in day-to-day Python use, but the features Python 3 provides do make a difference for the developers of the software stack you rely on. During the last few months, we got a new release of NumPy that supports the new Python 3 @ (aka __matmul__) operator, and a recent release of Pandas that now supports more operations with nogil. Matplotlib released its 1.5 version and should relatively soon release 2.0, which changes the default color scheme. These are only some of the core packages that support the whole Scientific Python infrastructure, and supporting both Legacy Python and Python 3 is a huge pain for their developers. Supporting a single major Python version makes maintainers' lives much easier, and with the Python 2.x branch coming to end of life in 2020, we can do our best to allow these maintainers to drop it.
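To illustrate the `@` protocol itself, here is a toy class (not NumPy's implementation) that participates in the new operator:

```python
class Matrix:
    """Minimal matrix type showing the Python 3.5 "@" protocol."""

    def __init__(self, rows):
        self.rows = rows

    def __matmul__(self, other):
        # naive matrix product; "a @ b" dispatches to this method
        return Matrix([[sum(x * y for x, y in zip(row, col))
                        for col in zip(*other.rows)]
                       for row in self.rows])

product = Matrix([[1, 2], [3, 4]]) @ Matrix([[5, 6], [7, 8]])
print(product.rows)  # [[19, 22], [43, 50]]
```

NumPy implements the same protocol on its arrays, which is why `a @ b` works on Python 3.5 with a recent NumPy.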

I really think that the features offered by Python 3 make maintaining packages easier, and if it takes me 10 minutes less to track a bug down because I can re-raise exceptions, or because I get a bug report with a chained-exception traceback, it gives me 10 more minutes to answer questions.
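For instance, exception chaining with `raise ... from ...` (the helper function and path below are made up for illustration):

```python
def load_config(path):
    try:
        with open(path) as f:
            return f.read()
    except OSError as err:
        # Python 3 keeps the original exception as __cause__, so the
        # traceback shows both stages of the failure.
        raise RuntimeError("could not load configuration") from err

try:
    load_config("/nonexistent/config.ini")
except RuntimeError as exc:
    print(type(exc.__cause__).__name__)  # FileNotFoundError
```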

As a side note, if you think I'm exaggerating when I say Legacy Python support can be a huge overhead: I've already spent a day on this bug, and I'm giving up on writing a test that works on Python 2.

Also, for the record: re-raising exceptions, yield, and keyword-only arguments are things that made the science I did during my PhD easier. Maybe not better, but easier for sure. I have been playing more and more with yield from, the Python 3 AST, and generalized unpacking since then, and they also help (understand: I wish they had been there, or that I had known how to use them, at the time). I strongly believe that the quality and speed of the science you do is influenced by your environment, and Python 3 is a much nicer environment than Python 2 was. Of course there is a learning curve, and yes, it is not easy: I went through it more than a year ago, when it was harder than it is today. It was not fun, nor was I convinced after a week, but after spending some time with Python 3, I really feel impaired when having to go back to Legacy Python.
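A quick sketch of two of those features (the function names are illustrative):

```python
def chain(*iterables):
    # "yield from" (Python 3.3+) delegates to each sub-iterable
    for iterable in iterables:
        yield from iterable

def normalize(values, *, scale=1.0):
    # everything after "*" is keyword-only: normalize(data, 2.0) is a TypeError
    return [v * scale for v in values]

print(list(chain([1, 2], (3,))))     # [1, 2, 3]
print(normalize([1, 2], scale=0.5))  # [0.5, 1.0]
```

Keyword-only arguments in particular make APIs harder to misuse, since callers must spell out the option they are setting.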

### Just one Python

So while not all scientists see the immediate effect of Python 3, I would really hope for all the core packages to be able to just forget about Legacy Python compatibility and focus their energy solely on pushing their libraries forward. It would really help the scientific stack move forward if all the SciPy-related libraries could be released more often, and one way to help that is to push Python 3 forward. We won't get end users to migrate to Python 3 if packages offer identical feature lists on both versions. The carrots are new features.

Mostly, what I would like is just one Python. I honestly don't think that Python 2 has a long-term future.

For the in-house legacy code that scientists cannot update, I'm really sorry for them, and it would be nice to find a solution; maybe something close to PyMetabiosis that would allow running Legacy Python modules inside Python 3? I understand the lack of funding and/or the technical competence/time to do the migration. But asking a volunteer-based project, which also lacks funding, to maintain backward compatibility indefinitely seems just as unreasonable. Not all projects have funding like IPython/Jupyter, and even those which get funding have deliverables: if we don't ship the new features we promised our generous sponsors, we will likely not get our grants renewed. More generally, this leads to the question of software sustainability in science. Is it unreasonable to ask a specific piece of software to work only in a restricted environment? It is definitely convenient, and that's why many papers now rely on VMs for results to be reproduced. But if you want your software to be used, you have to make it work on new hardware and new OSes, which often means new drivers and newer libraries, so why not a new version of the language you use? How many users are still not upgrading from XP? Is it unreasonable to ask them to install the versions of libraries that were distributed at that time?

A good single resource on how to operate the Python 2 to 3 transition is likely needed. Matt Davis created an example repository on how to write a Python 2 and 3 compatible extension. CFFI is in general something we should point users to, as it seems to be the approach that, in the long term, will lead to the least pain for future upgrades, and even more backward compatibility with 2.6. Cython also allows writing pseudo-Python that can run closer to C speed, and Cython has a pure Python mode, which I think is under-used. I suppose that with Python 3 type hinting, function annotations could be used in the same way as Cython's pure-Python-mode magic attributes, to provide Cython with some of the type annotations needed to generate fast, efficient code.

How to tackle the human problem is more complicated. It is hard to organize a community to help scientific software get ported to Python 3, as everyone is short on time and/or money. There is a need to realise that in-house software has a cost, and this cost at some point needs to be financed by institutions. The current cost is partially hidden, as it goes into the time of the graduate students and post-docs who, in the end, write the software. Graduate students and post-docs often lack the best practices of a good software engineer, which leads to technical debt accumulation.
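The annotation idea might look like this: plain Python 3 annotations carrying type information, runnable unchanged under CPython, that a compiler like Cython could exploit (a sketch of the concept, not Cython's actual annotation-typing API):

```python
def dot(a: list, b: list) -> float:
    """Pure-Python function whose annotations a compiler could use
    to generate specialised, faster code."""
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

print(dot([1.0, 2.0], [3.0, 4.0]))  # 11.0
```

The point is that the same file stays valid Python, so you only opt in to compilation when you need the speed.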

#### Which path forward

The Python scientific community is still growing, and laboratories are still starting to adopt Python. Whether or not it is a good thing, Python 2 will reach its end of life in a couple of years. And despite the many in-house libraries that still haven't been ported to Python 3, it is completely possible and reasonable to do science with a Python 3-only stack. It is important to make this new generation of Python programmers understand that choosing Python 3 is perfectly reasonable. In a perfect world, the question of which Python to use would get an obvious answer: Python 3. Of course, as time passes, there will always be some need for developers speaking Legacy Python, in the same way that NASA is looking for programmers fluent in 60-year-old languages.

## Alternatives

### Reviving Python 2.7

I'm playing devil's advocate here, to try to understand how this could go forward. The scientific community is likely not the only one using old versions of Python; Google recommends installing Python 1.6 to run its real-time Google Drive collaboration examples. So once 2.7 support is officially dropped by the PSF, one can imagine Google (or any other company) trying to take over. I am not a lawyer, but I guess that in a case of language revival, trademark might be an issue, so this new version might need to change its name.

It would not be the first time a fork of a project has become successful. Off the top of my head I can think of GNOME 2, which gave birth to MATE; GCC, which was forked into EGCS (which in the end was renamed GCC); as well as LibreOffice, the successful alternative to its "Open" sibling.

Seeing that the scientific community already lacks time and funding, I find the chances of this happening slim.

### Leapfrog

One of the main pains in the transition to Python 3 is keeping compatibility with both Pythons. Maybe one solution that will come up is actually jumping over the whole Python 2/3 problem to an alternative interpreter like PyPy, which just got a brand new release. The promise of PyPy is to make your Python code as fast as C, removing most of the need to write in lower-level languages. Other alternative interpreters like Pyston and Pyjion are also trying to use just-in-time compilation to improve Python performance. For sure it still needs some manual intervention, but it could greatly decrease the amount of work needed. I'm still unsure this is a good solution, as C-API/ABI compatibility with CPython 2.x is a complex task that does hinder language development. More generally, CPython implementation details do leak into the language definition and make alternative implementations harder.

### Conversion tools

The Julia community dealt with migrating code from Fortran by writing a Fortran-to-Julia transpiler. Python has the 2to3 and now python-modernize tools; it might be possible to write better conversion tools that handle common use cases more seamlessly, or even import hooks that would allow importing packages across versions. Maybe having a common AST module for all versions of Python, and having the tools work more closely together, is the way forward.

## Step forward

Make Python 3 adoption easier, or make it clearer what is not Python 3 compatible, both for humans and machines. There are some simple steps that can be taken: make sure your favorite library runs its tests on the latest Python version; it's often not too much work.

Point the default docs at Python 3 using canonical links. I too often come across examples that point to the Python 2 docs because they are better referenced by Google. Often the LPy/Py3 difference is small enough that the example works, but when it does not, it gives the impression that Python 3 is broken.

Tell us what you think, what you use, what you want, what you need, what you can't do.

This is a bit of an angry post. I won't explicitly give names, but if you know me you should be able to put the pieces together.

I'm angry because almost a year ago, I warned a well-known London startup that they had a large security flaw in their software. They responded that they were aware of it and were working on a fix. Here we are, almost a year later: the fix is still not implemented, all their users' passwords and credit card numbers are potentially in the wild, and they are responding to me on Twitter that, basically, I don't understand security.

I have a tip for them: if people disagree on whether a system is secure, then the system is most likely not secure.

### Do you trust your postman?

To be more accessible, I'll take the example of Alice and Bob communicating by snail mail, instead of using https/SSL encryption jargon.

Alice and Bob have never met, though they communicate regularly by mail. Bob is good at French, and is helping Alice improve hers by correcting her translations. To thank Bob, Alice decides to send him a gift, in the form of money. As Alice does not trust the postman not to look into the mail and steal cash, she decides to send a paper check. This does not prevent the postman from stealing the check, but if Bob's account number is written on the check, the postman will not be able to cash it.

The solution implemented by the above-not-mentioned company is the following:

Alice sends a letter asking Bob for his account number. Alice receives a response with an account number. Alice writes a check for this account and sends it to Bob's bank, whose address came with the account number.

According to ANMC (the Above Not Mentioned Company), this is secure, as Alice sends a check bearing Bob's bank account number to Bob's bank.

So let's see what the postman can do:

The postman intercepts Alice's letter to Bob asking for his account number. He does not even bother delivering the mail to Bob, and responds to Alice with a letter impersonating Bob, containing the postman's bank address and the postman's account number. Alice sends a paper check to the postman's bank without even realising that Bob was not receiving her mail.

So in the end, ANMC, your implementation is wrong. If your login page is served over http, and you tell me that the password is sent over https, nothing prevents me from man-in-the-middling your login form and having it submitted to a different address.

### I'm not a clown

This is basic security when you care a bit about what you are doing. Please be aware that people who take the time to submit you such bug reports are not clowns, and actually take the time to explain these things to you. Please respect them instead of just raising funds from venture capital.

You have a statement on your website saying that you care about user privacy; apparently you don't.

### Thanks

As usual, I'm not good at writing English, so I appreciate any pull request that fixes grammar, spelling, and English expressions.

# Planning an Early Death for Python 2

On September 18 and 19, 2015, the Data Structure for Data Science workshop gathered at UC Berkeley's BIDS (Berkeley Institute for Data Science). It was a productive two days of presentations, discussions, and working groups: a collaborative effort aimed at expanding what data science can do.

Despite being composed mostly of Python developers, the workshop reached out to and included members of many other programming communities (e.g., C, C++, Julia, R, etc.), as the workshop's explicit goal was to improve cross-language interoperability. In particular, the goal was to enable Python's scientific computing tools (NumPy, SciPy, Pandas, etc.) to have a consensus backbone data structure that would enable easier interaction with other programming languages.

Out of the discussion arose a topic that has long plagued the Python community at large: code that requires legacy Python 2.7 is holding back the development of data-science toolsets and – by extension – the progress of data science as a whole. Python 2.7 was an important part of the history of scientific computing, but now it should be left as part of that history. Thus, we convened a small working group to plan an early death for Legacy Python.

### Move over Legacy Python once and for all.

In collaboration with many developers, among whom @jiffyclub, @tacasswell, @kbarbary, @teoliphant, @pzwang, and @ogrisel, we discussed different options to push Legacy Python more or less gently out the door. We understand that some people still require Legacy Python in their code base, or use libraries which are still only available on Legacy Python, and we don't blame them. We understand that Legacy Python was a great language and that it's hard to move on from it. But Legacy Python's retirement is in 2020; you will have to make the transition by then, and it will be even harder at that point.

So what are the steps we can take to push the transition forward?

The choice of words you make, on the internet and in real life, will influence the vision people have of Legacy Python vs. Python. Assume that Python 3 is just Python, and refer to Python 2 as Legacy Python. IDEs and the TwitterSphere are starting to do that; join the movement.

Refer to Legacy Python in the past tense. It will reinforce the old and deprecated state of Legacy Python. I still don't understand why people would want to stay with a language that had this many defects:

• it did not protect you from mixing Unicode and bytes,
• it tripped you up with integer division,
• it did not allow you to replace the print function,
• it had a range object which was not memory efficient,
• it did not permit re-raising exceptions,
• it forced you to repeat the current class in super() calls,
• it let you mix tabs and spaces,
• it did not support function annotations.

Legacy Python was missing many other features which are now part of Python.
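A few of those defects, demonstrated from the Python 3 side:

```python
# Behaviours that Legacy Python got wrong, as they work in Python 3.
print(3 / 2)    # 1.5 — true division (Legacy Python silently truncated to 1)
print(3 // 2)   # 1   — integer division is now explicit
r = range(10**9)              # a lazy object, not a billion-element list
print(r[1000])                # 1000
print("caf\u00e9".encode())   # b'caf\xc3\xa9' — text and bytes no longer mix silently
```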

Do not state what's better in Python 3; state that it was missing or broken in Legacy Python. For example, the matrix multiplication operator was not merely new in Python 3: it was missing from Legacy Python, which prevented people from using efficient numeric libraries that rely on that operator.

### Don't respond to "and on Python 2"

Personally, during talks, I plan to not pay attention to questions regarding Legacy Python, and will treat such questions as someone asking whether I support Windows Vista. Next question, please. The less you talk about Legacy Python, the more you imply Legacy Python is not a thing anymore.

### Drop support for Legacy Python (at least on paper)

If you are a library author, you have probably had to deal with users trying your software on Legacy Python, and spent a lot of time making your codebase compatible with both Python 3 and Legacy Python. There are a few steps you can take to push users toward Python 3.

If a user is not willing to update to a new version of Python and decides to stay on Legacy Python, they can most likely pin the version of your library to the releases that still support Legacy Python.
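In practice that pinning is one line in a requirements file (the package name and version bound below are illustrative, not a real project's cut-off):

```
# requirements.txt for a user staying on Legacy Python:
# stay on the last release line that still supported Python 2
somelibrary<2.0
```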

#### Make your examples/documentation Python 3 only

Or at least, do not make an effort to have examples that work on Legacy Python. Sprinkling in function annotations and the async/await keywords can help communicate that your examples are Python 3 only.

You can even avoid mentioning Legacy Python in your documentation and assume your users are using Python 3; this will make writing documentation much easier and increase the chances of getting the examples right.

#### Ask users to reproduce on an up-to-date Python version

Have you ever had a bug report where you asked users to upgrade your library's dependencies? Do the same with Python. If a user files a bug report on Python 2.7, ask them if they can reproduce it with an up-to-date version of Python, even if the bug is obviously on your side. If they really can't upgrade, they will tell you; if they do upgrade and can reproduce it, then you'll have converted at least one user away from Legacy Python (and in the meantime you might have already fixed the bug).

#### Defer 2.7 support to companies like Continuum

This is already what Nick Coghlan recommends for Python 2.6, and that's what you can do for Legacy Python fixes. If you have a sufficient number of users asking for 2.7 support, accept the bug report, but as an open source maintainer do not work on it. You can partner with companies like Continuum or Enthought, from which users would "buy" 2.7 support for your libraries, in exchange for which the companies could spend some of their developer time fixing your Legacy Python bugs.

After a quick discussion with Peter Wang, this seems possible, but the details need to be worked out.

### Make Python 3 attractive

#### Create new features

Plan your new features explicitly for Python 3. Even if a feature would be simple to make Legacy Python compatible, disable it on old platforms and issue a warning indicating that the feature is not available on Legacy Python installs.
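A sketch of such a gate (the function and message are hypothetical):

```python
import sys
import warnings

def fancy_feature(data):
    # Gate the feature: full behaviour on Python 3, warn-and-skip on Legacy Python.
    if sys.version_info[0] < 3:
        warnings.warn("fancy_feature is only available on Python 3; "
                      "please upgrade your interpreter")
        return None
    return [x ** 2 for x in data]

print(fancy_feature([1, 2, 3]))  # [1, 4, 9] on a Python 3 install
```

The warning does the advocacy for you: Legacy Python users see exactly what they are missing every time they call it.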

You will be able to use all the shiny Python features which are lacking on Legacy Python, like Unicode characters!

#### Create new Python packages

Make new packages Python 3 only, and make all the design decisions you didn't make in your previous package. Pure Python libraries are much easier to create and build once you are not held back by Legacy Python.

### Helping other projects

Despite all the good will in the world, the migration path from Legacy Python can be hard. There are still a lot of things that can be done to help current and new projects push forward the adoption of Python 3.

#### Testing

Make sure that all the projects you care about have continuous integration on Python 3 (if possible, even the documentation build done with Python 3), and help make Python 3 the default.

With continuous integration, check that your favorite projects are tested on Python nightly. Most CI providers allow tests to be run on nightly builds without turning the project status red if those tests fail; see allow_failures on Travis-CI, for example.
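On Travis-CI, that looks roughly like this (a sketch of a .travis.yml; the key is spelled allow_failures):

```yaml
language: python
python:
  - "3.5"
  - "nightly"             # test the upcoming Python...
matrix:
  allow_failures:
    - python: "nightly"   # ...but don't turn the build red if it breaks
```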

#### Porting C-extensions, move to Cython

The path to migrating C extensions is not well documented. The preferred approach is to use CFFI, but there is still a lack of a well-written, centralised document on how to integrate with Python 3. If you are knowledgeable in this domain, your help is welcome.

### The things we will (probably) not do.

Make a Twitter account that shames people who use Legacy Python; though we might create a parody account which says funny things and pushes people toward Python 3.

Slow down code on purpose, and obviously, on Legacy Python:

```python
import sys

if sys.version_info[0] < 3:
    import time
    time.sleep(1)
```

Though it would be fun.

Ask users at IDE/CLI startup time if they want to upgrade to Python 3.

```
>>> e(x = $z+10, w = 20)(30, 40)
(×, [(×, [(+, [(+, [40, 10]), 30]), 20]), (+, [40, 10])])
>>> _()
80000
```

This gives you unbound Python expressions that you can manipulate as you like. Pretty awesome, and it would allow some neat metaprogramming. It would be nice if he could release his patch so that we could play with it.

A discussion with Olivier Grisel (scikit-learn) raised the point that a good package to deal with deprecation would be nice. Not just deprecation warnings, but a decorator/context manager that takes as a parameter the version of the software at which a feature is removed, and warns you that you have some code you can clean up. Using grep to find deprecation warnings doesn't always work, and sometimes you would like a flag to just raise instead of printing a deprecation warning, or even to do nothing when you are developing against a dev library.

### And on the code side?

Jupyter/IPython 3.0 is on its way. Slowly; we are late, as usual. A recap will come at the right time. I am working on integration with Google Drive, and will try to add real-time synchronisation to Jupyter 1.0/IPython 4.0. We plan on adding other Operational Transform library backends, but working on Google Drive first got me my post-doc. (Yes, Google signed a check to UC Berkeley.)

Of course, new projects mean more packaging, and as you all know, Python packaging is painful. Especially when starting a new project, you need to get all the basic files, etc. Of course you can use a project like audreyr/cookiecutter, but then you still have the hassle of:

• setting up the GitHub repository,
• finding a name,
• hooking up Travis-CI and finding the annoying setting that lets you trigger builds,
• hooking up Read the Docs,
• ... [1]
• and so on and so forth.

[1] Add everything that every project should do.

I strongly believe that the intelligence should be in the package manager, not the packager, and my few experiences with Julia's Pkg, and a small adventure with nodejs's npm, convinced me that there is really something wrong with Python packaging.
I'm not for a complete rewrite of it, and I completely understand the need for really custom setup scripts, but the complexity just to create a Python package is too high. So, as a week-end hack, introducing...

### Pip Create

So yes, I know that cookiecutter has pre- and post- hooks that might allow you to do that, but that's not the point. I just want a (few) simple commands I can remember, which can do most of the heavy lifting for me. I shamelessly call it PipCreate, because I hope that at some point in the future you will be able to just do a `$ pip create` to generate a new package.

To work, beyond its dependencies, it just needs a GitHub private token (and maybe that you have logged in to Travis-CI once, but I haven't tested that). The best thing is, if you don't have a GitHub token available, it will even open the browser for you at the right page. Here are the steps:

Either it grabs the GitHub token from your keychain, or it asks you for it, and then it does all the rest for you:

$ python3 -m pipcreate  # a real executable entry point will come later
will use target dir /Users/bussonniermatthias/eraseme
Comparing "SoftLilly" to other existing package name...
SoftLilly seem to have a sufficiently specific name
Logged in on GitHub as  Matthias Bussonnier
Workin with repository Carreau/SoftLilly
Clonning github repository locally
Cloning into 'SoftLilly'...
warning: You appear to have cloned an empty repository.
Checking connectivity... done.
I am now in  /Users/bussonniermatthias/eraseme/SoftLilly
Travis user: Matthias Bussonnier
syncing Travis with Github, this can take a while...
.....
syncing done
Enabling travis hooks for this repository
Travis hook for this repository are now enabled.
Continuous interation test shoudl be triggerd everytime you push code to github
/Users/bussonniermatthias/eraseme/SoftLilly
[master (root-commit) f4a9a5d] 'initial commit of SoftLilly'
28 files changed, 1167 insertions(+)
create mode 100644 .editorconfig
create mode 100644 .gitignore
create mode 100644 .travis.yml
create mode 100644 AUTHORS.rst
create mode 100644 CONTRIBUTING.rst
create mode 100644 HISTORY.rst
create mode 100644 MANIFEST.in
create mode 100644 Makefile
create mode 100644 docs/Makefile
create mode 100644 docs/authors.rst
create mode 100755 docs/conf.py
create mode 100644 docs/contributing.rst
create mode 100644 docs/history.rst
create mode 100644 docs/index.rst
create mode 100644 docs/installation.rst
create mode 100644 docs/make.bat
create mode 100644 docs/usage.rst
create mode 100644 requirements.txt
create mode 100644 setup.cfg
create mode 100755 setup.py
create mode 100755 softlilly/__init__.py
create mode 100755 softlilly/softlilly.py
create mode 100755 tests/__init__.py
create mode 100755 tests/test_softlilly.py
create mode 100644 tox.ini
Counting objects: 32, done.
Delta compression using up to 4 threads.
Compressing objects: 100% (25/25), done.
Writing objects: 100% (32/32), 13.41 KiB | 0 bytes/s, done.
Total 32 (delta 0), reused 0 (delta 0)
To git@github.com:Carreau/SoftLilly.git
* [new branch]      master -> master
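For the curious, the GitHub part of the log above essentially boils down to one authenticated API call against GitHub's documented `POST /user/repos` endpoint. Here is a rough sketch of that step (the helper name and structure are my own illustration, not PipCreate's actual code):

```python
import json
from urllib.request import Request

GITHUB_API = "https://api.github.com"

def create_repo_request(name, token, description=""):
    """Build the authenticated request that creates a new GitHub
    repository via POST /user/repos (illustrative sketch only)."""
    payload = {"name": name, "description": description, "auto_init": False}
    return Request(
        GITHUB_API + "/user/repos",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": "token " + token,
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Hypothetical token for illustration; a real run would read it from
# the keychain, as the log above shows.
req = create_repo_request("SoftLilly", "dummy-token")
print(req.get_method(), req.full_url)
```

Building the request separately from sending it keeps the interesting part testable offline; actually sending it is then a single `urllib.request.urlopen(req)` call.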


It finishes up by opening Travis on the test page after a push, so that you can see the first tests passing after a few seconds.

Hope that will give you some ideas; patches are welcome.

Anyway, it's late now. Have fun, and PRs for typos are welcome too.

# PyCon 2014


Almost two weeks ago was the beginning of PyCon 2014. I took a break during the writing of my dissertation to attend, and even if I was a little concerned, because it was the first non-scientific-centric Python conference I attended, I am really happy to have made the choice to go.

I'll write what will hopefully be a (short) recap of (my view of) the conference. Maybe not in perfect chronological order, though.

### Getting There

So this year, PyCon US was... in Montreal, Canada: a 7-hour flight from where I live, in a seat narrower than me, and I'm not that wide. Fortunately there was video on demand. I watched Gravity, but I must say I was really disappointed. Being a physicist, I saw a lot of mistakes that ruined the movie. Guys, it is worthless to program an engine that simulates physics if you give it the wrong rules. From basic physics classes you will learn the following:

• Angular momentum conservation also applies in space.
• There is no buoyancy-driven convection, hence no flames like the ones on Earth.
• Usually the laws of physics don't change with time.

Landing in Canada, customs were more thorough than US customs. They actually searched for what PyCon was, and asked me questions about it.

Going from the airport to central Montreal was quite easy, direct by bus 747. Finding the right stop was harder, as the bus driver was announcing stop numbers rather than station names, and no map was available in the bus.

The hotel was a block away from the conference center (but not that easy to find when you enter by the given street address and not the main entrance).

So I was finally settled around 18 hours before the beginning of the tutorials; I tweeted to know if anybody was around, met with Agam, got dinner, slept.

### Tutorials

During the first day I was able to attend two tutorials, both of which I enjoyed, and which allowed me to adjust to the right timezone. A 1h lunch is just enough to meet with people and discuss; fortunately, you get (some) time to discuss and meet people in the evening.

I caught up with Fernando, who was giving the IPython tutorial the next morning.

By the way, do not forget to update your phone's apps before leaving the country, especially the ones you want to use that require sending you a text when you are abroad and do not get roaming.

The IPython tutorial on Thursday went great, except for some really weird installation issues where:

• Anaconda refuses to install because it is already installed, yet is nowhere on the hard drive, and in the end you force a miniconda install and handle all the dependencies by hand.

• Of course you brought pendrives to avoid users downloading the 300MB installer over conference wifi, but you forgot the Win-XP-32bit installer.

I was really surprised to see people still using IPython 0.12, and the modal interface seems to make sense for people who have never used IPython.

### 1h lunch is too short, just extend it.

One thing that really makes PyCon so great (and the Python community in general) is the people. I basically missed many tutorials and talks during the week only because I was discussing with people. I must say that I learned a lot by talking with Nick Coghlan and Fernando Pérez (ok, as a non-native English speaker I might have been really quiet, but I learned a lot, and I'm getting better: I'm starting to hear people's accents).

If you want something in Python core, you need to be persistent. By default, anything you propose to the CPython core community/PSF will be refused, but you need to persevere. If you persevere, you show the community that you are ready to put time into supporting what you are proposing.

Also keep in mind that even if there are obvious issues (ahem, packaging), people are working on them; you probably don't know the implications:

So please either be patient, or help them with the most basic thing missing: documenting! As seen later in the conference, CPython, packaging, documenting, and pip, which once were difficult to distinguish, are becoming separate spaces; for now the people involved tend to be the same, but things are shifting.

Becoming a member of the PSF is now also open to everybody, so you can now get even more involved in Python.

### Thursday evening reception

Even after missing one tutorial, the afternoon can get busy: volunteers packed stuff into bags for the (real) opening of the conference the day after. It is also the time when sponsors start to set up their booths and when you can talk to people before the big crowd arrives. It is usually also the time to finally see in flesh and blood the people you only know by their twitter/github/... handle. Often it even takes you a day or two to find out you already know each other.

Unlike at other meals, there was free beer (a limited number per person); I still found it a little annoying that you had to pay for soft drinks afterwards. It is also the right time to look at the different competitions around the booths to win stuff. This goes from LinkedIn's informal "do this puzzle as fast as you can", to Google, where you had five minutes on the clock to program an AI for your snake in a "Snake Arena" game.

The reception ended pretty early, as the conference had not officially started yet, but if you plan on arriving the day before, it is a must-attend if you want to start meeting people and adjusting to the timezone when you are flying in from further west.

Still on an empty stomach (except for 3 beers), I met with Jess Hamrick (who describes her version of PyCon here) for dinner, before crashing to sleep. I won't repeat what Jess has said, as she is much more skillful with words than I am.

### Core of the conference

I really appreciated the mornings during the main conference, as breakfast was served in the convention center, which allowed you to socialize early in the morning.

(photo: img/breakfast.jpg, breakfast served in the convention center)
As you can see, there is room for plenty of people!

I won't spend too much time on the keynotes and talks that were presented; they are already all uploaded on PyVideo / YouTube, and I must say that the PyCon team in charge of filming and organising the events did a pretty awesome job!

In general I felt like PyCon was really different from SciPy, first by the number of people present, which is huge:

(photo: img/crowd.jpg)

Main conference room before a keynote (stolen from twitter at some point; if you find the author, please tell me so that I can update the post).

Also unlike SciPy, PyCon is much less oriented toward science, which gave me the opportunity both to discover lots of projects and to find out that, in the end, IPython is not that widely used and known. Having been once in Texas and once in Canada, in my own experience conference food during breaks is always better in Austin than in Montreal.

In the evenings we either went to restaurants, or to parties organised by speakers or sponsors. Note that even if it is great to have an open bar (and not cheap alcohol, but nice wine) and unlimited cake, I felt like real food was missing.

### Sprints

When going to another continent, I usually prefer to optimise and stay as long as I can, hence I also stayed for the sprints (I'll write my PhD dissertation later). On the other hand, Fernando Pérez is a busy man and was only present for the first day and a half. Hence we were not supposed to have a real IPython sprint beyond the first day.

So we arrived at the sprint on Monday morning off the cuff, expecting to have only a few people. To our surprise, after 1h we were around 20 around the table, with our numbers growing. We were more and more in need of a blackboard for a global explanation of the IPython internals. Fortunately, our hacker spirit led us to steal a PyCon sign and use its back as a whiteboard:

(photos: img/fperez-pycon-sign.jpg and img/fperez-whiteboard.JPG, Fernando explaining IPython internals on the back of the sign)