Update questions

Chris Proctor
2026-04-08 10:49:23 -04:00
parent d9e981fd66
commit df3940b258


@@ -2,19 +2,24 @@
 Answer each of the following questions.
+## Model quality
+1. How well did the model you used work in this lab? Describe one problem you encountered
+with the model's responses, and explain how you changed the system prompt to address it.
 ## Memory and models
-1. How much memory is available on your system?
+2. How much memory is available on your system?
-2. List two models from Hugging Face or Ollama's model library that could run on your system. For each one, explain
+3. List two models from Hugging Face or Ollama's model library that could run on your system. For each one, explain
 what it is good at.
-3. Choose an interesting-looking model that *cannot* run on your system. What is this model
+4. Choose an interesting-looking model that *cannot* run on your system. What is this model
 good at doing? Imagine an AI-powered app you could create with this model and describe it.
 ## Local vs. remote models
-4. Ollama can easily be configured to use a remotely-hosted model; you just provide the URL
+5. Ollama can easily be configured to use a remotely-hosted model; you just provide the URL
 where the model is hosted. List some of the advantages of locally-hosted models, and some
 of the advantages of remotely-hosted models. What kinds of apps would be most suitable for
 locally-hosted models? What kinds of apps would be most suitable for remotely-hosted models?
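For the "How much memory is available on your system?" question, one quick way to check is to read `/proc/meminfo`. This is a minimal sketch, assuming a Linux system (the helper name `read_meminfo` is ours, not part of the lab):

```python
# Report total and available physical memory on a Linux system
# by parsing /proc/meminfo (values there are given in kB).

def read_meminfo():
    """Parse /proc/meminfo into a dict mapping field names to kB values."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":", 1)
            info[key] = int(value.strip().split()[0])
    return info

if __name__ == "__main__":
    info = read_meminfo()
    print(f"Total RAM:     {info['MemTotal'] / 1024**2:.1f} GiB")
    print(f"Available RAM: {info['MemAvailable'] / 1024**2:.1f} GiB")
```

On macOS or Windows this file does not exist; `sysctl hw.memsize` or Task Manager would be the equivalent check.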
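For the remote-model question: one common way to point the `ollama` CLI at a remotely hosted server is the `OLLAMA_HOST` environment variable. A sketch only; the hostname below is a placeholder, and 11434 is Ollama's default port:

```shell
# Point the local ollama CLI at a remotely hosted Ollama server.
# ollama.example.com is a placeholder for wherever the model is hosted.
export OLLAMA_HOST=http://ollama.example.com:11434
ollama run llama3 "Hello from a remote model"
```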