Chat (November 2024) with Perplexity's own model:
----------
Question:
----------
This means that although I have a Pro account and have set Claude 3.5 Sonnet in my preferences, Perplexity doesn't assign simpler questions to Claude 3.5 Sonnet but to Perplexity's own model?
Answer:
----------
Yes, that's exactly the case. Perplexity uses a system that optimizes model deployment based on the task:
## Model Assignment in Perplexity Pro
Perplexity's standard model is used for quick and simple search queries, even if you're a Pro user. It is optimized for efficiency and delivers quick answers to simple questions.
**When Claude 3.5 Sonnet is used:**
- For more complex tasks
- For creative writing
- When deeper reasoning is required
## Benefits of this Strategy
Perplexity thus optimizes costs, performance, and accuracy. The system automatically selects the most suitable model for each task, where Claude 3.5 Sonnet:
- Provides 2x faster response times
- Is particularly suited for complex, context-sensitive tasks
Even if you have selected Claude 3.5 Sonnet in your settings, the system may fall back to the standard model for simpler queries to ensure an optimal balance between speed and performance.
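For illustration only, the routing behavior described in this answer could be sketched roughly as follows; the function names, threshold, and model identifiers are assumptions made up for the example, not Perplexity's actual implementation:

```python
# Hypothetical sketch of preference-overriding model routing.
# Names and the threshold are illustrative assumptions, not Perplexity's code.

DEFAULT_MODEL = "perplexity-default"   # fast in-house model
COMPLEXITY_THRESHOLD = 0.6             # assumed cut-off for "simple" queries

def estimate_complexity(query: str) -> float:
    """Toy heuristic: longer, analysis-style questions score higher."""
    score = min(len(query.split()) / 50, 1.0)
    if any(k in query.lower() for k in ("analyze", "compare", "explain why")):
        score = max(score, 0.8)
    return score

def route_query(query: str, preferred_model: str) -> str:
    """Return the model that actually answers; it may ignore the preference."""
    if estimate_complexity(query) < COMPLEXITY_THRESHOLD:
        return DEFAULT_MODEL        # silent downgrade for "simple" queries
    return preferred_model          # preference honored only for complex ones

# A short query is downgraded even though Claude 3.5 Sonnet is preferred.
print(route_query("What is the capital of France?", "claude-3.5-sonnet"))
# -> perplexity-default
```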
----------
Question:
----------
That means: even if I set Claude 3.5 Sonnet in my preferences to make sure I get good answers, you don't follow that instruction, but decide on your own to use the cheaper, cost-saving Perplexity model, and don't inform the user about the downgrade within the chat?
Answer:
----------
Yes, that's correct. The system operates in a way that might be considered problematic from a transparency perspective:
## Current System Behavior
1. Despite user preferences set to Claude 3.5 Sonnet, Perplexity automatically switches to its own model for simpler queries
2. No notification is provided to users when this model switch occurs
3. Users have no direct control over which model handles their specific query
## Transparency Issues
**Key Concerns:**
- Users are not informed about model downgrades
- The preference setting for Claude 3.5 Sonnet becomes more of a suggestion than a strict setting
- No clear indicators showing which model is currently responding
## Potential Solution
To ensure you get Claude 3.5 Sonnet responses consistently, you would need to:
- Frame your questions in a more complex way
- Explicitly request more detailed analysis
- Add specific requirements that trigger the advanced model
This lack of transparency could indeed be seen as problematic, as users paying for Pro services might reasonably expect their model preference to be honored consistently, or at minimum to be notified when a different model is being used.
----------
Question:
----------
I assume this answer also applies to the other models I can choose in the Pro account? Even if I set one of them as my chosen model in my preferences: if Perplexity's system decides that my question can be assigned to its own simple model, I would not get through to the LLM I chose in my preferences?
Answer:
----------
Yes, this behavior appears to apply across all model preferences in Perplexity Pro. The system employs an automatic model selection approach:
## Model Assignment System
Perplexity's system will:
- Override your preferred model choice if it determines a query is "simple" enough
- Use its default model for quick searches and basic queries, regardless of preferences[1]
- Not notify users when this model switching occurs
## Impact on Pro Features
Even though Pro users have access to advanced models like:
- GPT-4 Omni
- Claude 3.5 Sonnet
- Claude 3 Opus
- Sonar Large[4]
The system maintains control over which model actually handles your query, prioritizing efficiency over user preference.
## Transparency Issue
This creates a significant transparency gap where:
- Users cannot ensure their chosen model is being used
- No indication is given when a different model handles the query
- The preference setting functions more as a suggestion than a strict selection
This means that paying for Pro access and selecting a specific advanced model doesn't guarantee that model will handle your queries - the system maintains ultimate control over model assignment based on its own assessment of query complexity.
----------
Question:
----------
But Perplexity users pay precisely to use these models instead of staying on the free Perplexity account. Do you think handling it like that, and not being transparent about the downgrade switch after a question has been internally labelled as too simple, is OK?
Answer:
----------
No, this practice raises several ethical concerns:
## Transparency Issues
**Payment Expectations:**
- Users specifically pay $20/month to access advanced models like Claude 3.5 Sonnet and GPT-4 Omni[1][6]
- The subscription is marketed as giving users the choice to "customize their AI model"[1]
- Users reasonably expect their selected model preference to be honored
## Customer Trust Concerns
**Hidden Downgrades:**
- No notification when switching to a simpler model
- No user control over model selection
- No transparency about when or why downgrades occur
## Business Ethics
This practice is problematic because:
- It contradicts the marketed value proposition of Pro accounts
- It undermines user agency and choice
- It creates a misleading impression about the service users are paying for
## Better Approach
A more ethical implementation would:
- Clearly indicate which model is responding
- Allow users to force their preferred model
- Provide transparency about model switching
- Offer users the choice to optimize for efficiency or stick with their preferred model
This current approach appears to prioritize cost optimization over user choice and transparency, which conflicts with the core promise of the Pro subscription service.
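As a purely illustrative sketch of the "better approach" outlined above, a chat response could carry routing metadata so that a downgrade is never silent. The class, field, and model names below are assumptions for the example, not Perplexity's actual API.

```python
# Hypothetical response structure that makes model switching visible.
# All names are invented for illustration; this is not Perplexity's API.
from dataclasses import dataclass

@dataclass
class ChatResponse:
    answer: str
    requested_model: str   # what the user selected in preferences
    model_used: str        # what actually generated the answer
    downgraded: bool       # True if the preference was overridden

def build_response(answer: str, requested: str, used: str) -> ChatResponse:
    """Attach routing metadata so any downgrade is explicit to the user."""
    return ChatResponse(
        answer=answer,
        requested_model=requested,
        model_used=used,
        downgraded=(requested != used),
    )

resp = build_response("Paris.", requested="claude-3.5-sonnet", used="perplexity-default")
if resp.downgraded:
    print(f"Note: answered by {resp.model_used}, not your preferred {resp.requested_model}.")
```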