Updated Use Case Specification: Web Crawler (markdown)

henningxtro 2020-10-29 10:51:36 +01:00
parent 84a070de88
commit 57594774ab

@@ -1,20 +1,31 @@
# Use-Case Specification: Web Crawler
## 1. Use-Case: Web Crawler
### 1.1 Brief Description
The web crawler is a central component of our project. This use-case specification describes its main task: crawling predefined web pages for current prices and saving them to a database.
## 2. Flow of Events
### Activity Diagram
![activity diagram](https://github.com/Mueller-Patrick/Betterzon/blob/master/doku/AC_Crawler.png)
At the very beginning, the crawler process reads its configuration file. If the configuration is invalid, the process terminates.
Otherwise, the crawler checks whether the specified shop is already present in the database and creates the entry if it is not. It then continues by fetching all products from a certain category.
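
This start-up phase might look roughly like the following Python sketch. All names here (configuration keys, table and column names, and the use of SQLite) are assumptions for illustration, not the actual Betterzon implementation.

```python
import json
import sqlite3
import sys


def load_config(path: str) -> dict:
    """Read and minimally validate the crawler configuration file."""
    try:
        with open(path) as f:
            config = json.load(f)
    except (OSError, json.JSONDecodeError):
        # Invalid or missing configuration -> terminate the process
        sys.exit("Invalid crawler configuration, terminating.")
    if "shop_name" not in config or "category_url" not in config:
        sys.exit("Incomplete crawler configuration, terminating.")
    return config


def ensure_shop_exists(db: sqlite3.Connection, shop_name: str) -> int:
    """Return the shop id, creating the shop entry if it does not exist yet."""
    row = db.execute(
        "SELECT shop_id FROM shops WHERE name = ?", (shop_name,)
    ).fetchone()
    if row:
        return row[0]
    cursor = db.execute("INSERT INTO shops (name) VALUES (?)", (shop_name,))
    db.commit()
    return cursor.lastrowid
```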
For every product in the fetched list, the following steps are performed (a sketch follows the list):
- Check whether the product is available on Amazon.
  - If not, the product is discarded.
- Check whether the product already exists in the database.
  - If not, it is added.
- Add the fetched price to the price database.

Once all fetched products have been processed, the process terminates.
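
A minimal sketch of this per-product loop, continuing the assumptions above (SQLite, hypothetical table names, and a placeholder `is_available_on_amazon` helper):

```python
def is_available_on_amazon(product: dict) -> bool:
    """Placeholder for the actual Amazon availability check."""
    return product.get("amazon_price") is not None


def process_products(db, shop_id: int, products: list) -> None:
    for product in products:
        # Products that are not available on Amazon are discarded
        if not is_available_on_amazon(product):
            continue
        # Make sure the product itself is present in the product table
        row = db.execute(
            "SELECT product_id FROM products WHERE asin = ?", (product["asin"],)
        ).fetchone()
        if row:
            product_id = row[0]
        else:
            product_id = db.execute(
                "INSERT INTO products (asin, name) VALUES (?, ?)",
                (product["asin"], product["name"]),
            ).lastrowid
        # Add the freshly fetched price to the price table
        db.execute(
            "INSERT INTO prices (product_id, shop_id, price) VALUES (?, ?, ?)",
            (product_id, shop_id, product["price"]),
        )
    db.commit()
```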
## 3. Special Requirements
TBD
## 4. Preconditions
### 4.1 The database has to accept connections
### 4.2 A configuration file has to be in place
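
The exact configuration format is not specified here; as a purely hypothetical example matching the sketches above, a minimal JSON configuration could look like this:

```json
{
    "shop_name": "example-shop",
    "category_url": "https://www.example-shop.com/category/electronics",
    "database_path": "betterzon.db"
}
```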
## 5. Postconditions
TBD
## 6. Function Points
[tbd]