As APIs form the backbone of modern software architecture, I wanted to share this comprehensive REST API cheatsheet covering crucial implementation aspects:

1. Core Architectural Principles:
- Client-server separation ensures scalability and independent evolution
- Statelessness eliminates server-side session storage
- Cacheability improves performance and reduces server load
- Layered system architecture enables middleware and security layers
- Code on demand provides flexibility for client-side execution
- Uniform interface standardizes client-server communication

2. HTTP Methods Demystified (illustrated in the sketch below):
- GET: Retrieve data (read)
- POST: Create new resources
- PUT: Complete resource update
- PATCH: Partial resource modification
- DELETE: Remove resources
- HEAD: Fetch headers only
- OPTIONS: Check available operations

3. Status Code Categories:
- 2xx: Success (200 OK, 201 Created)
- 3xx: Redirection (301 Moved Permanently)
- 4xx: Client errors (401 Unauthorized, 404 Not Found)
- 5xx: Server errors (500 Internal Server Error)

4. Security Implementation:
- OAuth 2.0/JWT for robust authentication
- Role-based (RBAC) authorization
- TLS/SSL encryption
- Input validation
- Rate limiting
- CORS configuration
- Security headers (CSP, X-Frame-Options)

5. Resource Naming Best Practices:
- Noun-based endpoints (/users, /products)
- Plural resources for collections
- Hyphenated compound words
- Lowercase for consistency

6. Production-Ready Features:
- API versioning in URLs
- Query parameter filtering
- Resource sorting capabilities
- Pagination for large datasets
- Comprehensive error handling
- OpenAPI documentation
- Efficient caching strategies

What other critical aspects do you consider when designing REST APIs?
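Here is a minimal sketch of how sections 2 and 3 translate into code, assuming Flask as the framework; the `/users` routes and the in-memory store are illustrative choices, not part of the original cheatsheet:

```python
# A minimal sketch of HTTP methods and status codes, assuming Flask.
# The /users resource and in-memory store are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)
users = {}   # in-memory store, for illustration only
next_id = 1

@app.route("/users", methods=["GET"])
def list_users():
    return jsonify({"data": list(users.values())}), 200  # 200 OK

@app.route("/users", methods=["POST"])
def create_user():
    global next_id
    body = request.get_json(silent=True)
    if not body or "name" not in body:
        return jsonify({"error": "name is required"}), 400  # 400 Bad Request
    user = {"id": next_id, "name": body["name"]}
    users[next_id] = user
    next_id += 1
    return jsonify(user), 201  # 201 Created

@app.route("/users/<int:user_id>", methods=["DELETE"])
def delete_user(user_id):
    if user_id not in users:
        return jsonify({"error": "not found"}), 404  # 404 Not Found
    del users[user_id]
    return "", 204  # 204 No Content

if __name__ == "__main__":
    app.run()
```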
Best Practices for API Development
-
How To Handle Sensitive Information in Your Next AI Project

It's crucial to handle sensitive user information with care. Whether it's personal data, financial details, or health information, understanding how to protect and manage it is essential to maintain trust and comply with privacy regulations. Here are 5 best practices to follow:

1. Identify and Classify Sensitive Data
Start by identifying the types of sensitive data your application handles, such as personally identifiable information (PII), sensitive personal information (SPI), and confidential data. Understand the specific legal requirements and privacy regulations that apply, such as GDPR or the California Consumer Privacy Act.

2. Minimize Data Exposure
Only share the necessary information with AI endpoints. For PII, such as names, addresses, or social security numbers, consider redacting this information before making API calls (see the sketch after this list), especially if the data could be linked to sensitive applications, like healthcare or financial services.

3. Avoid Sharing Highly Sensitive Information
Never pass sensitive personal information, such as credit card numbers, passwords, or bank account details, through AI endpoints. Instead, use secure, dedicated channels for handling and processing such data to avoid unintended exposure or misuse.

4. Implement Data Anonymization
When dealing with confidential information, like health conditions or legal matters, ensure that the data cannot be traced back to an individual. Anonymize the data before using it with AI services to maintain user privacy and comply with legal standards.

5. Regularly Review and Update Privacy Practices
Data privacy is a dynamic field with evolving laws and best practices. To ensure continued compliance and protection of user data, regularly review your data handling processes, stay updated on relevant regulations, and adjust your practices as needed.

Remember, safeguarding sensitive information is not just about compliance; it's about earning and keeping the trust of your users.
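To make point 2 concrete, here is a minimal sketch of regex-based redaction before an AI API call. The patterns are illustrative assumptions; a production system would use a dedicated PII-detection service rather than hand-rolled regexes.

```python
# Hypothetical sketch: redact common PII patterns before sending text to an
# AI endpoint. Patterns below are illustrative, not exhaustive.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact_pii(text: str) -> str:
    """Replace recognized PII with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = "The patient can be reached at 555-123-4567 or jane@example.com."
print(redact_pii(prompt))
# -> "The patient can be reached at [REDACTED PHONE] or [REDACTED EMAIL]."
```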
-
A Cheatsheet to Build Secure APIs

An insecure API can compromise your entire application. Follow these strategies to mitigate the risk:

1 - Use HTTPS
Encrypts data in transit and protects against man-in-the-middle attacks. This ensures that data hasn't been tampered with during transmission.

2 - Rate Limiting and Throttling
Rate limiting prevents DoS attacks by limiting requests from a single IP or user. The goal is to ensure fairness and prevent abuse.

3 - Input Validation
Defends against injection attacks and unexpected data formats. Validate headers, inputs, and payloads.

4 - Authentication and Authorization
Don't use basic auth for authentication. Instead, use a standard approach like JWTs. Use a random, hard-to-guess key as the JWT secret, and keep token expiration short (see the sketch after this list). For authorization, use OAuth.

5 - Role-Based Access Control
RBAC simplifies access management for APIs, reduces the risk of unauthorized actions, and gives granular control over user permissions based on roles.

6 - Monitoring
Monitoring your APIs is the key to detecting issues and threats early. Use tools like Kibana, CloudWatch, Datadog, and Slack for monitoring. Don't log sensitive data like credit card info, passwords, credentials, etc.

Over to you: What else would you do to build a secure API?

--
Subscribe to our weekly newsletter to get a free System Design PDF (158 pages): https://bit.ly/bbg-social

#systemdesign #coding #interviewtips
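As a sketch of point 4, here is how short-lived JWTs with a random secret might look using the PyJWT library. The claim names and the 15-minute lifetime are assumptions rather than a prescription, and the secret would normally come from a secrets manager.

```python
# Sketch: issue and verify a short-lived JWT with PyJWT (pip install PyJWT).
# The secret is generated per process here purely for illustration.
import datetime
import secrets

import jwt

SECRET = secrets.token_hex(32)  # random, hard-to-guess signing key

def issue_token(user_id: str) -> str:
    payload = {
        "sub": user_id,
        # Short expiration limits the damage of a leaked token.
        "exp": datetime.datetime.now(datetime.timezone.utc)
        + datetime.timedelta(minutes=15),
    }
    return jwt.encode(payload, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    # Pin the algorithm list to prevent algorithm-confusion attacks;
    # decode() also rejects expired tokens automatically.
    return jwt.decode(token, SECRET, algorithms=["HS256"])

token = issue_token("user-123")
print(verify_token(token)["sub"])  # -> "user-123"
```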
-
AI is not failing because of bad ideas; it's "failing" at enterprise scale because of two big gaps:
👉 Workforce Preparation
👉 Data Security for AI

While I speak globally on both topics in depth, today I want to educate us on what it takes to secure data for AI, because 70–82% of AI projects pause or get cancelled at POC/MVP stage (source: #Gartner, #MIT). Why? One of the biggest reasons is a lack of readiness at the data layer.

So let's make it simple: there are 7 phases to securing data for AI, and each phase carries direct business risk if ignored.

🔹 Phase 1: Data Sourcing Security - Validating the origin, ownership, and licensing rights of all ingested data.
Why It Matters: You can't build scalable AI with data you don't own or can't trace.

🔹 Phase 2: Data Infrastructure Security - Ensuring the data warehouses, lakes, and pipelines that support your AI models are hardened and access-controlled.
Why It Matters: Unsecured data environments are easy targets for bad actors, exposing you to data breaches, IP theft, and model poisoning.

🔹 Phase 3: Data In-Transit Security - Protecting data as it moves across internal or external systems, especially between cloud, APIs, and vendors.
Why It Matters: Intercepted training data = compromised models. Think of it as shipping cash across town in an armored truck or on a bicycle: your choice.

🔹 Phase 4: API Security for Foundational Models - Safeguarding the APIs you use to connect with LLMs and third-party GenAI platforms (OpenAI, Anthropic, etc.).
Why It Matters: Unmonitored API calls can leak sensitive data into public models or expose internal IP. This isn't just tech debt. It's reputational and regulatory risk.

🔹 Phase 5: Foundational Model Protection - Defending your proprietary models and fine-tunes from external inference, theft, or malicious querying.
Why It Matters: Prompt injection attacks are real. And your enterprise-trained model? It's a business asset. You lock your office at night; do the same with your models.

🔹 Phase 6: Incident Response for AI Data Breaches - Having predefined protocols for breaches, hallucinations, or AI-generated harm: who's notified, who investigates, how damage is mitigated.
Why It Matters: AI-related incidents are happening. Legal needs response plans. Cyber needs escalation tiers.

🔹 Phase 7: CI/CD for Models (with Security Hooks) - Continuous integration and delivery pipelines for models, embedded with testing, governance, and version-control protocols.
Why It Matters: Shipping models like software means risk comes faster, and so must detection. Governance must be baked into every deployment sprint.

Want your AI strategy to succeed past MVP? Focus on, and lock down, the data.

#AI #DataSecurity #AILeadership #Cybersecurity #FutureOfWork #ResponsibleAI #SolRashidi #Data #Leadership
-
🚀 REST API Cheat Sheet: Best Practices and Guidelines 🚀

🔍 Designing robust APIs? Check out this cheat sheet with key best practices:
- **Versioning:** Use version numbers in the URL for effective change management.
- **Filtering:** Utilize query parameters to filter resources efficiently.
- **Sorting:** Implement sorting using query parameters for better organization.
- **Pagination:** Manage large datasets with ease using the limit and offset parameters.
- **Error Handling:** Ensure meaningful error codes for clear issue understanding.
- **Documentation:** Make use of tools like OpenAPI (Swagger) for comprehensive documentation.
- **Caching:** Improve performance with server-side or client-side caching (see the sketch after this list).

📚 Resource Naming:
- **Nouns:** Opt for nouns for resource names like users and products.
- **Plurals:** Use plural nouns for collections to maintain consistency.
- **Hyphens:** Enhance readability by using hyphens in resource names.
- **Lowercase:** Maintain consistency by using lowercase letters.

🔒 Security Measures:
- **Authentication:** Implement OAuth 2.0 or JWT for secure access.
- **Authorization:** Manage permissions effectively with RBAC or ABAC.
- **HTTPS:** Ensure data security in transit with TLS/SSL encryption.
- **Input Validation:** Prevent security vulnerabilities with thorough data validation.
- **Rate Limiting:** Prevent abuse by limiting requests effectively.
- **CORS:** Control access from different origins with configured CORS headers.

🔑 API Essentials:
- **Status Codes:** Understand the meaning behind HTTP status codes for effective communication.
- **HTTP Methods:** Explore the various methods like GET, POST, PUT, PATCH, DELETE for resource handling.
- **Core Principles:** Dive into core principles like client-server separation and statelessness for efficient API design.

ℹ️ This cheat sheet is your go-to for designing efficient, secure, and user-friendly RESTful APIs.

#API #BestPractices #Security #Developers
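As a rough illustration of the caching bullet, here is one way server-side revalidation could look with ETag and Cache-Control headers; Flask and the `/products` route are assumed for the example, and real ETag headers are quoted, a detail this sketch skips.

```python
# Sketch: HTTP caching via ETag revalidation plus a short max-age.
# Flask and the /products resource are assumptions.
import hashlib
import json

from flask import Flask, Response, request

app = Flask(__name__)
PRODUCTS = [{"id": 1, "name": "keyboard"}, {"id": 2, "name": "mouse"}]

@app.route("/products")
def list_products():
    body = json.dumps({"data": PRODUCTS})
    etag = hashlib.sha256(body.encode()).hexdigest()
    # If the client already holds the current version, skip the body.
    if request.headers.get("If-None-Match") == etag:
        return Response(status=304)  # 304 Not Modified
    resp = Response(body, mimetype="application/json")
    resp.headers["ETag"] = etag
    resp.headers["Cache-Control"] = "max-age=60"  # client may cache for 60s
    return resp
```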
-
I've reviewed more than 2,000 code review requests in my career. At this point, it's as natural to me as having a cup of coffee.

However, from a senior engineer to now an engineering manager, I've learned a lot in between. If I had to learn to review code all over again, this is the checklist I would follow (inspired by my experience):

1. Ask clarifying questions:
- What are the exact constraints or edge cases I should consider?
- Are there any specific inputs or outputs to watch for?
- What assumptions can I make about the data?
- Should I optimize for time or space complexity?

2. Start simple:
- What is the most straightforward way to approach this?
- Can I explain my initial idea in one sentence?
- Is this solution valid for the most common cases?
- What would I improve after getting a basic version working?

3. Think out loud:
- Why am I taking this approach over another?
- What trade-offs am I considering as I proceed?
- Does my reasoning make sense to someone unfamiliar with the problem?
- Am I explaining my thought process clearly and concisely?

4. Break the problem into smaller parts:
- Can I split the problem into logical steps?
- What sub-problems need solving first?
- Are any of these steps reusable for other parts of the solution?
- How can I test each step independently?

5. Use test cases:
- What edge cases should I test?
- Is there a test case that might break my solution?
- Have I checked against the sample inputs provided?
- Can I write a test to validate the most complex scenario?

6. Handle mistakes gracefully:
- What's the root cause of this mistake?
- How can I fix it without disrupting the rest of my code?
- Can I explain what went wrong to the interviewer?
- Did I learn something I can apply to the rest of the problem?

7. Stick to what you know:
- Which language am I most confident using?
- What's the fastest way I can implement the solution with my current skills?
- Are there any features of this language that simplify the problem?
- Can I use familiar libraries or tools to save time?

8. Write clean, readable code:
- Is my code easy to read and understand?
- Did I name variables and functions meaningfully?
- Does the structure reflect the logic of the solution?
- Am I following best practices for indentation and formatting?

9. Ask for hints when needed:
- What part of the problem am I struggling to understand?
- Can the interviewer provide clarification or a nudge?
- Am I overthinking this?
- Does the interviewer expect a specific approach?

10. Stay calm under pressure:
- What's the first logical step I can take to move forward?
- Have I taken a moment to reset my thoughts?
- Am I focusing on the problem, not the time ticking away?
- How can I reframe the problem to make it simpler?
-
The Cybersecurity and Infrastructure Security Agency, together with the National Security Agency, the Federal Bureau of Investigation (FBI), the National Cyber Security Centre, and other international organizations, published this advisory providing recommendations for how organizations can protect the integrity, confidentiality, and availability of the data used to train and operate #artificialintelligence.

The advisory focuses on three main risk areas:
1. Data #supplychain threats: Including compromised third-party data, poisoning of datasets, and lack of provenance verification.
2. Maliciously modified data: Covering adversarial #machinelearning, statistical bias, metadata manipulation, and unauthorized duplication.
3. Data drift: The gradual degradation of model performance due to changes in real-world data inputs over time.

The best practices recommended include:
- Tracking data provenance and applying cryptographic controls such as digital signatures and secure hashes (see the sketch after this list).
- Encrypting data at rest, in transit, and during processing, especially sensitive or mission-critical information.
- Implementing strict access controls and classification protocols based on data sensitivity.
- Applying privacy-preserving techniques such as data masking, differential #privacy, and federated learning.
- Regularly auditing datasets and metadata, conducting anomaly detection, and mitigating statistical bias.
- Securely deleting obsolete data and continuously assessing #datasecurity risks.

This is a helpful roadmap for any organization deploying #AI, especially those working with limited internal resources or relying on third-party data.
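A minimal sketch of the provenance recommendation, assuming a simple JSON manifest of SHA-256 digests; the file layout and manifest format are illustrative choices, not something the advisory prescribes:

```python
# Sketch: record and verify secure hashes for dataset files so silent
# tampering is detectable. Manifest format and *.csv layout are hypothetical.
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

def write_manifest(data_dir: Path, manifest: Path) -> None:
    digests = {p.name: sha256_of(p) for p in sorted(data_dir.glob("*.csv"))}
    manifest.write_text(json.dumps(digests, indent=2))

def verify_manifest(data_dir: Path, manifest: Path) -> list[str]:
    """Return names of files whose contents no longer match the manifest."""
    recorded = json.loads(manifest.read_text())
    return [name for name, digest in recorded.items()
            if sha256_of(data_dir / name) != digest]
```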
-
I've personally integrated with 1,000 APIs (and read thousands of sets of API documentation). What are the best practices for building an API that users can actually use (without re-inventing the wheel)? 👇

1. Bearer token authentication
Offer a self-service way for account admins to create API keys (I'd call them an "API Key" in your app; don't create your own name). Add them as a header: "Authorization": "Bearer {{ api_key }}". Over time, Client Credentials or OAuth Auth Code can make sense (Client Credentials is the best in my opinion, but Bearer tokens are great to start).

2. Use JSON requests and responses
Please. Just use JSON.

3. Don't return top-level arrays in your responses
Starting out, it's easy to assume people want JUST their data in a response like this:
[ { "id": "123" } ]
But if you have a top-level array and you ever want to add pagination details or metadata to the response, it's very much a breaking change. Instead, use a data object:
{ "data": [ { "id": "123" } ] }
You can then add other top-level objects over time, with things like pagination and error messages, without breaking downstream workflows (see the sketch after this list).

4. Make it a REST API; don't use GraphQL
Unless the only, and primary, use case is for other people to build a UI on top of your API, don't use GraphQL. Build a REST API to start, and over time, when you think about exposing a GraphQL API instead: don't do it.

5. Build on the OpenAPI spec and use standardized documentation generators
It'll simplify things dramatically for you and your clients.

6. Start with limit + offset, or page + page_size, for pagination
Don't create your own pagination mechanics. Use the norm. If there's no data in a response, don't throw an error; just return an empty data array.

7. Create list endpoints for all of the core entities
Customers, users, invoices, etc. should all have list endpoints so customers can paginate through the data they need.

8. Include foreign keys in responses
Don't make people add this manually.

9. Use an "id" field for primary keys
Don't create your own names. And keep it standard across entities.

What else would you add?
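Putting points 3 and 6 together, a list endpoint might look like the sketch below; Flask, the `/customers` route, and the exact envelope field names are assumptions:

```python
# Sketch: a list endpoint returning a "data" envelope with limit/offset
# pagination metadata. Route name and envelope fields are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)
CUSTOMERS = [{"id": str(i), "name": f"Customer {i}"} for i in range(1, 26)]

@app.route("/customers")
def list_customers():
    limit = min(int(request.args.get("limit", 10)), 100)  # cap page size
    offset = int(request.args.get("offset", 0))
    page = CUSTOMERS[offset:offset + limit]
    # An empty page is a normal 200 with an empty data array, not an error.
    return jsonify({
        "data": page,
        "pagination": {
            "limit": limit,
            "offset": offset,
            "total": len(CUSTOMERS),
        },
    })
```

Because the envelope is an object, adding a future top-level key (say, "errors" or "links") is a non-breaking change, which is the whole argument of point 3.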
-
The EDPB recently published a report on AI Privacy Risks and Mitigations in LLMs. This is one of the most practical and detailed resources I've seen from the EDPB, with extensive guidance for developers and deployers.

The report walks through privacy risks associated with LLMs across the AI lifecycle, from data collection and training to deployment and retirement, and offers practical tips for identifying, measuring, and mitigating risks.

Here's a quick summary of some of the key mitigations mentioned in the report:

For providers:
• Fine-tune LLMs on curated, high-quality datasets and limit the scope of model outputs to relevant and up-to-date information.
• Use robust anonymisation techniques and automated tools to detect and remove personal data from training data.
• Apply input filters and user warnings during deployment to discourage users from entering personal data, as well as automated detection methods to flag or anonymise sensitive input data before it is processed.
• Clearly inform users about how their data will be processed through privacy policies, instructions, warnings, or disclaimers in the user interface.
• Encrypt user inputs and outputs during transmission and storage to protect data from unauthorized access (see the sketch after this list).
• Protect against prompt injection and jailbreaking by validating inputs, monitoring LLMs for abnormal input behaviour, and limiting the amount of text a user can input.
• Apply content filtering and human review processes to flag sensitive or inappropriate outputs.
• Limit data logging and provide configurable options to deployers regarding log retention.
• Offer easy-to-use opt-in/opt-out options for users whose feedback data might be used for retraining.

For deployers:
• Enforce strong authentication to restrict access to the input interface and protect session data.
• Mitigate adversarial attacks by adding a layer for input sanitization and filtering, and by monitoring and logging user queries to detect unusual patterns.
• Work with providers to ensure they do not retain or misuse sensitive input data.
• Guide users to avoid sharing unnecessary personal data through clear instructions, training, and warnings.
• Educate employees and end users on proper usage, including the appropriate use of outputs and phishing techniques that could trick individuals into revealing sensitive information.
• Ensure employees and end users avoid overreliance on LLMs for critical or high-stakes decisions without verification, and ensure outputs are reviewed by humans before implementation or dissemination.
• Securely store outputs and restrict access to authorised personnel and systems.

This is a rare example where the EDPB strikes a good balance between practical safeguards and legal expectations. Link to the report included in the comments.

#AIprivacy #LLMs #dataprotection #AIgovernance #EDPB #privacybydesign #GDPR
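As one possible reading of the encryption mitigation, here is a sketch of encrypting stored LLM inputs with the `cryptography` library's Fernet recipe; key management is deliberately simplified and would live in a KMS or secrets manager in practice:

```python
# Sketch: symmetric encryption of stored LLM inputs/outputs using Fernet
# (pip install cryptography). Key handling is simplified for illustration.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # store securely, never alongside the data
fernet = Fernet(key)

user_prompt = "Summarize my medical history: ..."
ciphertext = fernet.encrypt(user_prompt.encode())  # safe to persist

# Later, an authorized service decrypts for processing or audit.
plaintext = fernet.decrypt(ciphertext).decode()
assert plaintext == user_prompt
```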
-
If I were just starting out with APIs, these are the 10 rules I'd follow. These best practices will help you create simple, clear, and consistent APIs that are easy to use and understand.

1/ Keep It Simple
↳ Use clear, concise endpoints that describe resources.
↳ Avoid over-complicating; keep naming consistent and understandable.
↳ Example: `/books` for all books, `/books/{id}` for a specific book.

2/ Use RESTful Design
↳ Use standard HTTP methods: GET, POST, PUT, DELETE.
↳ Name endpoints with nouns like `/users` or `/orders` for clarity.
↳ Example: HTTP code 200 (success), 404 (not found), 500 (server error).

3/ Choose Standard Data Formats
↳ Use JSON as it's readable and widely supported.
↳ Keep data formats consistent across endpoints.
↳ Example: `{ "title": "To Kill a Mockingbird", "author": "Harper Lee" }`.

4/ Provide Clear Documentation
↳ Document endpoints with detailed descriptions.
↳ Provide request and response examples for easy usage.
↳ Example: Explain `/users/{id}` with request/response samples.

5/ Implement Versioning
↳ Include versioning in the URL to manage changes.
↳ Allow for updates without breaking existing clients.
↳ Example: `/v1/books` for version 1, `/v2/books` for an updated version.

6/ Ensure Security
↳ Use HTTPS for data encryption.
↳ Implement authentication and authorization mechanisms.
↳ Example: OAuth 2.0 to secure user access to APIs.

7/ Handle Errors Gracefully
↳ Use standard HTTP status codes like 400, 404, and 500.
↳ Provide informative error messages to help resolve issues.
↳ Example: `400 Bad Request` for invalid input, with a detailed error message.

8/ Optimize Performance
↳ Use caching to store frequent responses and speed up access.
↳ Apply rate limiting to control the number of requests a user can make (see the sketch after this list).
↳ Example: Cache popular books, limit requests to prevent server overload.

9/ Test Thoroughly
↳ Conduct functionality, performance, and security testing.
↳ Ensure different user scenarios are tested for reliability.
↳ Example: Use automated tools for end-to-end testing before deployment.

10/ Monitor and Update
↳ Monitor API performance and user activity continuously.
↳ Update the API to address bugs or add features regularly.
↳ Example: Use Prometheus to monitor latency and health.

P.S.: What would you add from your experience?
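For rule 8, a fixed-window rate limiter is about the simplest possible sketch; the window size, request limit, and in-memory store are illustrative assumptions (real deployments typically push this to a gateway or Redis):

```python
# Sketch: fixed-window rate limiting keyed by client ID.
# WINDOW_SECONDS and MAX_REQUESTS are arbitrary illustration values.
import time
from collections import defaultdict

WINDOW_SECONDS = 60
MAX_REQUESTS = 100

# client_id -> (window_start, count)
_windows: dict[str, tuple[int, int]] = defaultdict(lambda: (0, 0))

def allow_request(client_id: str) -> bool:
    now = int(time.time())
    window_start, count = _windows[client_id]
    if now - window_start >= WINDOW_SECONDS:
        _windows[client_id] = (now, 1)  # start a fresh window
        return True
    if count < MAX_REQUESTS:
        _windows[client_id] = (window_start, count + 1)
        return True
    return False  # caller should respond with 429 Too Many Requests

print(allow_request("client-a"))  # -> True
```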