Mathilde Luce-Lucas

Web developer



Ruby on Rails, web design, UX, business model

A search engine whose goal is to index all e-commerce products

As part of the ESGI Master course, we had to build a website to demonstrate the skills we had acquired during our Ruby on Rails class. The topic was left to the students' choice, as long as the platform integrated the following features: a database, CRUD management (Create-Read-Update-Delete), entities linked together (belongs_to, has_many...), and the principal components of a dynamic application.
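The "entities linked together" requirement maps to Rails-style associations between models. Below is a minimal plain-Ruby sketch of how a user, a product, and a join record could relate; the macros are toy stand-ins so the example runs without a Rails app, and all model names are illustrative rather than Shoparama's actual schema.

```ruby
# Toy stand-ins for ActiveRecord's association macros, so this sketch
# runs without Rails. Model and association names are illustrative.
class Model
  def self.has_many(name)
    define_method(name) do
      instance_variable_get("@#{name}") ||
        instance_variable_set("@#{name}", [])
    end
  end

  def self.belongs_to(name)
    attr_accessor name
  end
end

class User < Model
  has_many :favorites        # a user keeps a list of saved products
end

class Product < Model
  has_many :favorites
end

class Favorite < Model
  belongs_to :user           # join record linking a user to a product
  belongs_to :product
end

user, product, fav = User.new, Product.new, Favorite.new
fav.user = user
fav.product = product
user.favorites << fav
puts user.favorites.first.product.equal?(product)  # true
```

In the real application these would be ActiveRecord models backed by database tables, with the join table giving the many-to-many link between users and the products they save.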


Shoparama was a project first imagined for another class. Its primary concept was to build a search engine dedicated to e-commerce websites, in the same vein as Google Shopping but without its sponsorship and advertisement business model. To do so, we planned to use product metadata from every e-commerce website we could scrape, relying on Semantic Web technologies to build entity and product descriptions and thus avoid duplication. Only these entities would be stored in our system, rather than the product details themselves: we would always redirect users to the product's page on the original website and would never manage the conversion path ourselves.

Like any search engine, we let users search for any keyword they like. Our system then forwards the request and queries various websites' APIs. Users can apply filters to the results we present. If logged in, a user can "save" a product of their choosing to their account; then and only then is the product stored in our database and its semantic specificities added to our model.
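The save-on-favorite rule ("then and only then") can be sketched in plain Ruby. The in-memory store and field names below are our own illustration, not the application's actual schema:

```ruby
# In-memory stand-in for the products table; the real store is a
# relational database. Field names here are illustrative.
DB = {}

# Persist a product only at the moment a logged-in user favorites it.
def save_favorite(product)
  key = product.fetch('external_id')   # id on the source website
  if DB.key?(key)
    DB[key].merge!(product)            # already saved: refresh attributes
  else
    DB[key] = product                  # first favorite: insert it
  end
  DB[key]
end

save_favorite('external_id' => 'ebay-123', 'title' => 'USB cable', 'price' => 3.5)
save_favorite('external_id' => 'ebay-123', 'title' => 'USB cable', 'price' => 2.9)
puts DB.size                    # 1 -- the second save updated, not duplicated
puts DB['ebay-123']['price']    # 2.9
```

Keying on the product's identifier at the source website keeps the store free of duplicates even when the same product is favorited twice.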

Web scraping

To query e-commerce websites and index their products, we quickly found ourselves limited to two main platforms: eBay and Amazon. We used gems to query their APIs and build our models from their product information: Rebay for eBay and Vacuum for Amazon.

When we query them with keywords, these gems return JSON responses. Our program parses them a first time to keep only the information we are interested in and trim useless fields. We merge all these results, then parse them a second time to remove possible duplicates. We look the products up in our database to check whether any of them were already saved in our system, and compare the information to decide whether anything needs updating. We then display the results to the user, ordered by relevance to their query. If the user chooses to add a product to their favorites and the product is not already marked as saved by us, it is added to our database.
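The two-pass pipeline above can be sketched in plain Ruby. The response shape and field names ("title", "url", "price", "score") are invented for illustration; real eBay and Amazon payloads differ, and the first pass would map each provider's format onto this common shape.

```ruby
require 'json'

# First pass: parse one provider's JSON and keep only the fields we need.
def first_pass(raw_json, source)
  JSON.parse(raw_json).map do |item|
    { 'source' => source,
      'title'  => item['title'],
      'url'    => item['url'],
      'price'  => item['price'],
      'score'  => item['score'] }        # relevance score for ordering
  end
end

# Second pass: merge both sources, drop duplicates (same title, case-
# insensitive), and order by relevance score, highest first.
def merge_results(ebay, amazon)
  (ebay + amazon)
    .uniq { |p| p['title'].downcase }
    .sort_by { |p| -p['score'] }
end

ebay_raw   = '[{"title":"USB cable","url":"e/1","price":3.5,"score":0.9}]'
amazon_raw = '[{"title":"USB Cable","url":"a/1","price":4.0,"score":0.8},
               {"title":"HDMI cable","url":"a/2","price":7.0,"score":0.7}]'

results = merge_results(first_pass(ebay_raw, 'ebay'),
                        first_pass(amazon_raw, 'amazon'))
puts results.map { |p| p['title'] }.inspect  # ["USB cable", "HDMI cable"]
```

Deduplicating on a normalized title is a simplification; matching the same physical product across two marketplaces is exactly where the Semantic Web entities described earlier would help.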


  • ESGI
  • Master of Web Engineering
  • 242 Rue du Faubourg Saint-Antoine, 75012 Paris (FR)