# Questions
Answer each of the following questions.
## Model quality
1. How well did the model you used work in this lab? Describe one problem you encountered with the model's responses, and explain how you changed the system prompt to address it.
## Memory and models
2. How much memory is available on your system?
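If you're not sure where to look, one way to check is from the command line. This is a sketch for Linux; the macOS equivalent is shown as a comment:

```shell
# On Linux, total physical memory is reported in /proc/meminfo
grep MemTotal /proc/meminfo

# On macOS, the equivalent is:
# sysctl hw.memsize
```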
3. List two models from Hugging Face or Ollama's model library that could run on your system. For each one, explain what it is good at.
4. Choose an interesting-looking model that *cannot* run on your system. What is this model good at doing? Imagine an AI-powered app you could create with this model and describe it.
## Local vs. remote models
5. Ollama can easily be configured to use a remotely hosted model; you just provide the URL where the model is hosted. List some of the advantages of locally hosted models, and some of the advantages of remotely hosted models. What kinds of apps would be most suitable for locally hosted models? What kinds of apps would be most suitable for remotely hosted models?
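For reference while answering, the `ollama` CLI reads the `OLLAMA_HOST` environment variable to decide which server to talk to. A minimal config sketch (the host and model name below are placeholders, not a real deployment):

```shell
# Point the ollama CLI at a remote server instead of the local default
# (example.com:11434 is a placeholder; substitute your server's address)
export OLLAMA_HOST=http://example.com:11434

# Subsequent commands now talk to the remote host
ollama run llama3
```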