Before this week, data for our programs has been stored in our computers' random-access memory, or RAM. Memory (RAM) is a fast, temporary place to store information, but it is not suitable for long-term storage. As you undoubtedly experienced over the past two weeks, if you shut down your server and re-launch your application, most of your program's data is gone!
This week, we will begin persisting data with full-blown databases! We’ll start by writing our test data into a PostgreSQL database that lives only in the computer’s memory, and then we’ll transition to using a production database.
We'll begin by learning about SQL and how relational databases work. Then, we'll cover how to set up and configure our very own databases, including best practices for naming and data organization/architecture. After that, we’ll learn how to integrate databases into our Java-backed apps, and how to retrieve, store, update, and delete database entries directly within our Spark applications. On top of that, we’ll also learn to work with objects within objects in our Spark apps - such as assigning Categories, a skill you can use to further enhance your Blog from last week.
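The "objects within objects" idea can be sketched in plain Java before any database is involved: one object holds a collection of other objects. A minimal sketch, assuming hypothetical `Category` and `Task` classes (these names are illustrative, not the curriculum's actual classes):

```java
import java.util.ArrayList;
import java.util.List;

// A child object in a one-to-many relationship.
class Task {
    private final String description;

    Task(String description) {
        this.description = description;
    }

    String getDescription() {
        return description;
    }
}

// A parent object that contains many Tasks - "objects within objects."
class Category {
    private final String name;
    private final List<Task> tasks = new ArrayList<>();

    Category(String name) {
        this.name = name;
    }

    void addTask(Task task) {
        tasks.add(task);
    }

    List<Task> getTasks() {
        return tasks;
    }

    String getName() {
        return name;
    }
}

public class CategoryDemo {
    public static void main(String[] args) {
        Category chores = new Category("Chores");
        chores.addTask(new Task("Mop the floor"));
        chores.addTask(new Task("Do the dishes"));
        System.out.println(chores.getName() + " has "
                + chores.getTasks().size() + " tasks");
    }
}
```

Once a database enters the picture, the same relationship is typically modeled with a foreign key: each persisted task row stores the id of the category it belongs to, and retrieving a category's tasks becomes a query filtered on that id.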
Additionally, we'll learn how to update and delete objects in our database from within an application, and how to write tests to properly assert that all database functionality is working correctly. Then, once we feel a little more comfortable, we'll begin to explore more advanced SQL queries to return very specific database entries or types of information.
By the end of this week, you’ll be able to persist data, handle exceptions, deal with nested data and extended routing, and much more!
This week's independent project will be reviewed on the following criteria: