Rails has great capabilities for working with CSV files. However, as with many things, the most obvious way is not the most efficient. We noticed this when our server showed major fluctuations in memory consumption. After digging through metrics (made easy thanks to Prometheus and Grafana), we found that the spikes were caused by our CSV uploads. Examining the CSV import: our processor is responsible for bringing in coordinates from legacy systems and from systems that cannot support our API.
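The memory spikes described above typically come from parsing an entire CSV file into memory at once. A minimal sketch of the difference, using Ruby's standard `csv` library — the file name and the `lat`/`lng` column names here are assumptions for illustration, not details from the article:

```ruby
require "csv"

# Hypothetical sample upload; column names are assumed for this sketch.
path = "coordinates.csv"
File.write(path, "lat,lng\n51.5,-0.12\n40.7,-74.0\n")

# CSV.read loads the whole file into memory at once -- fine for small
# files, but the likely source of memory spikes on large uploads.
all_rows = CSV.read(path, headers: true)

# CSV.foreach streams the file row by row instead, so memory stays flat
# regardless of how large the upload is.
coordinates = []
CSV.foreach(path, headers: true) do |row|
  coordinates << [row["lat"].to_f, row["lng"].to_f]
end

File.delete(path)
```

Both approaches yield the same rows; the difference is purely in how much of the file is resident in memory at any one time.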
I recently gave a talk at the local Fullstack Meetup on the Phoenix Framework. Phoenix is the de facto web framework for Elixir. In this talk, I cover some of the best features of Phoenix: Erlang, the platform which Elixir and Phoenix are built on (albeit a very brief overview of its major features); Plug, the specification for composable module design that gives Phoenix much of its power; and Ecto, the database integration layer.
Haskell is notoriously difficult to set up, which has probably scared many people away from ever getting started. However, a lot of work has been done to address these shortcomings, and there is now a way to set up a very pleasant environment thanks to the hard work of many projects. Traditional methods included installing the Haskell Platform, which was a great project in its time but always seemed to lag a few GHC versions behind.
Table of Contents: Not Enough Time; Missing or Changing Requirements; Unrealistic Expectations; The Right Tool for the Job; Technological Limitations; Wrap Up. Engineers want to measure the quality and effectiveness of their work. They turn to code coverage, burndown charts, and yearly goals with the best intentions. These measurements provide quantitative data about how much planned work was delivered, but they fail to tie the work to outcomes.