From 57594774ab542b57cafd4eb7efa3b7ee3d205b35 Mon Sep 17 00:00:00 2001
From: henningxtro
Date: Thu, 29 Oct 2020 10:51:36 +0100
Subject: [PATCH] Updated Use Case Specification: Web Crawler (markdown)

---
 Use-Case-Specification:-Web-Crawler.md | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/Use-Case-Specification:-Web-Crawler.md b/Use-Case-Specification:-Web-Crawler.md
index bdc435b..1c4c01b 100644
--- a/Use-Case-Specification:-Web-Crawler.md
+++ b/Use-Case-Specification:-Web-Crawler.md
@@ -1,20 +1,31 @@
 # Use-Case Specification: Web Crawler
 
 ## 1. Use-Case: Web Crawler
 ### 1.1 Brief Description
-
+The web crawler is an important component of our project. This use-case specification covers the component's main task: crawling predefined webpages for current prices and saving them into a database.
 
 ## 2. Flow of Events
 ### Activity Diagram
 ![activity diagram](https://github.com/Mueller-Patrick/Betterzon/blob/master/doku/AC_Crawler.png)
+At the very beginning, the crawler process reads its configuration file. If the file is invalid, the process terminates.
+Otherwise, the crawler checks whether the specified shop is already present in the database; if not, it creates the entry. It then fetches all products from a certain category.
+For every product in that list, the following steps are performed:
+- Check whether the product is available on Amazon.
+  - If not, the product is discarded.
+- Check whether the product is in the database.
+  - If not, it is added.
+- Add the fetched price to the price database.
+Once all fetched products are processed, the process terminates.
 
 ## 3. Special Requirements
+TBD
 
 ## 4. Preconditions
-### 4.1 The user has to be logged in
-
+### 4.1 The database has to accept connections
+### 4.2 A configuration file has to be in place
 
 ## 5. Postconditions
+TBD
 
 ## 6. Function Points
 [tbd]
\ No newline at end of file
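
The flow of events added by this patch (read config, ensure the shop exists, then filter, register, and price each fetched product) can be sketched as below. This is a minimal illustration, not the project's actual code: all names (`Database`, `read_config`, `is_available_on_amazon`, the product dict shape) are hypothetical stand-ins, and the database is an in-memory mock.

```python
# Hypothetical sketch of the crawl loop from the flow of events.
# Database and all helper names are illustrative, not Betterzon's real API.

def read_config(path):
    # Stand-in: a real crawler would parse the configuration file at `path`
    # and return None if it is invalid.
    return {"shop": "ExampleShop", "category": "electronics"}

class Database:
    """In-memory stand-in for the price database."""
    def __init__(self):
        self.shops = set()
        self.products = set()
        self.prices = []

    def ensure_shop(self, shop):
        # Create the shop entry if it is not present yet.
        self.shops.add(shop)

    def ensure_product(self, product_id):
        # Add the product if it is not in the database yet.
        self.products.add(product_id)

    def add_price(self, product_id, price):
        self.prices.append((product_id, price))

def is_available_on_amazon(product):
    # Stand-in availability check.
    return product.get("available", False)

def crawl(config, db, fetched_products):
    if config is None:
        return False                      # invalid config -> terminate
    db.ensure_shop(config["shop"])
    for product in fetched_products:
        if not is_available_on_amazon(product):
            continue                      # discard unavailable products
        db.ensure_product(product["id"])
        db.add_price(product["id"], product["price"])
    return True                           # all products processed

db = Database()
products = [
    {"id": "B001", "price": 19.99, "available": True},
    {"id": "B002", "price": 5.49, "available": False},  # will be discarded
]
ok = crawl(read_config("crawler.conf"), db, products)
print(ok, sorted(db.products), db.prices)
```

Running the sketch processes only the available product, mirroring the activity diagram: the unavailable one is discarded before any database writes happen for it.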