Thoughts

This is a mirror of my Telegram channel, where I share some of my findings and thoughts on technology.
https://www.recursive.design/ I really love Recursive, a variable font. As a UI font, my favorite feature is that, with the sans/mono variant fixed, changing the font weight does not change the text width. In front-end development, bumping an element's font-weight on hover often causes layout jitter, forcing us to fake boldness with text-shadow instead. With Recursive, this problem is elegantly avoided. And since it also ships a monospace variant, I'm currently using Recursive in my code editor too, where it looks great.
July 18, 2023
https://blog.cloudflare.com/cloudflare-snippets-alpha/ Interesting: Cloudflare Snippets can sit in front of an API to run lightweight business logic, somewhat like an API Gateway. It's odd, though, that they didn't simply call it middleware or an interceptor, which would match common usage.
June 26, 2023
It suddenly occurred to me that an Interface Definition Language (IDL) like Protocol Buffers (protobuf) or Thrift usually lets you rename fields freely: as long as field types and numbers stay unchanged, the wire format stays consistent, so client and server never conflict. But if you use a protobuf-based framework like grpc-gateway or Connect to talk to the frontend in JSON rather than binary, the situation reverses: you can freely change field numbers, but you absolutely cannot change field names. And if you want to stay compatible with both JSON and binary (native gRPC) transports, then effectively nothing about a field in the IDL can ever change. At root, this is because the two modes index fields differently: binary protobuf identifies fields by number, JSON by name.
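A quick way to see the difference from Go, as a sketch (the field number and value are made up for illustration; protowire is the real low-level wire-format package):

```go
package main

import (
	"fmt"

	"google.golang.org/protobuf/encoding/protowire"
)

func main() {
	// Binary protobuf identifies a field purely by number + wire type.
	// Hand-encode field #1 (a string) as "alice": the field *name* never
	// appears on the wire, so renaming it is harmless to binary clients.
	var buf []byte
	buf = protowire.AppendTag(buf, 1, protowire.BytesType)
	buf = protowire.AppendString(buf, "alice")
	fmt.Printf("wire bytes: % x\n", buf)

	// JSON serialization (protojson) instead keys on the field name:
	//   {"userName": "alice"}
	// so a rename breaks JSON clients, while renumbering (invisible in
	// JSON) would break the binary clients above.
}
```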
June 17, 2023
Let's survey the various analytics SaaS offerings. Apart from Google Analytics, almost none of the comparable services are cheap: Vercel's and Cloudflare's cost about $20 a month. For pure website statistics that hardly seems worth it, especially with low traffic; self-hosting something like umami costs less than $2 a month. For me, the bigger pain point is actually server-side observability. Maintaining a Prometheus/InfluxDB + Grafana stack is a real chore, so I planned to find an analytics SaaS with native OpenTelemetry support and feed it data from both frontend and backend through OpenTelemetry, ideally consolidating logs/metrics/traces/alerts into one platform. After shopping around: either the price is steep, or the features are incomplete, or OpenTelemetry isn't supported. On reflection, once the dimensionality of this kind of telemetry grows, you easily get an amplification effect where telemetry traffic is several times business traffic. I maintain a statistics service at work, and it's the cost monster of the entire system, so I can understand why analytics SaaS is so expensive.
June 9, 2023
Recently, I upgraded my self-hosted umami from 1.x to 2.x. Following the documentation, I migrated the table structure and data in Postgres successfully, but once the 2.x instance was up, it couldn't read the data; the logs suggested Prisma was erroring while processing it. Out of options, I cleared the old data, which made the original error go away, but newly reported data from the web pages still wasn't consumed and written correctly. I'm tired, too lazy to roll back to 1.x, and not sure I want to keep using umami.
June 9, 2023
This morning I tried to reach the company intranet from home using Surge as a jump host, but in rule mode I couldn't get through: every request fell through to the FINAL rule and went out via the airport (my commercial proxy), which of course can't reach the intranet. Even DNS failed (it needs the intranet DNS). After a long investigation, the cause turned out to be an IP-CIDR rule I had added before the intranet rule. The rules were roughly:

IP-CIDR,10.0.0.0/8,PROXY
DOMAIN-SUFFIX,company.org,COMPANY
FINAL,PROXY,dns-failed

Per the Surge documentation, rules are matched top to bottom, and when an IP-type rule is encountered, Surge performs DNS resolution before matching. So:
1. For an intranet request, Surge hit the IP-CIDR rule first and immediately tried to resolve the domain; the local resolver obviously can't reach the intranet DNS, so resolution failed.
2. Per the dns-failed behavior, the FINAL rule's PROXY policy was used, falling back to the airport.
Moving the IP-CIDR rule to the bottom solved the problem. Honestly, this behavior is quite a gotcha...
June 7, 2023
Macs (especially Intel ones) often hit a state where kernel_task maxes out the CPU after waking from sleep, presumably because the system has to read memory pages cached on disk back into RAM, causing serious lag. https://discussions.apple.com/thread/5497235 I've tried various remedies without noticeable effect. The method I've now settled on: open a terminal and run caffeinate -s. As long as that process stays alive, the system never sleeps at all, sidestepping the wake-up problem entirely.
June 2, 2023
Eventually, I returned to simplicity: a GitHub repository holding my various configuration files, cloned locally and wired up by hand with ln -s symlinks. For config files that don't change daily, there's really no need for real-time sync via cloud drives like iCloud. Here's my rewritten .zshrc, which I use across my macOS/Linux environments; a GitHub Action syncs it automatically from the config repository to this Gist: https://gist.github.com/sorcererxw/238f7068c18ba148337f32f9a08d0dbd
May 24, 2023
I spent all of last night wrestling with JetBrains Gateway, hoping to use GoLand for remote development. The overall experience is terrible. Features and plugins must distinguish between Host and Client, some plugins have to be installed on both sides to work, and the mental overhead is high; many plugins are designed for client mode and simply can't work in this split frontend/backend model. Performance is acceptable: opening a Go monorepo on an 8-core/16 GB dev machine maxes out the CPU while indexing, and uses about 2 cores/8 GB at idle. All in all, I'd rather do remote development with neovim; at least the experience is native.
May 24, 2023
Over the weekend I tinkered with the new Ponte networking feature in Surge 5. Compared to Tailscale, which I've been using, its clear advantage is that you don't have to run your own DERP relay server: it can use the airport directly as the relay. With a Hong Kong node as relay, latency from the office to my Mac at home stays consistently below 100 ms, and because airports are rarely speed-limited, VNC remote desktop doesn't need to compress image quality. Access to home-network services from outside takes only a couple of simple rules:

HOME = select, DIRECT, DEVICE:mymac
IP-CIDR,192.168.50.0/24,HOME

Conversely, I can also use the company Mac as a jump host to reach the company intranet from home. Very convenient.
May 8, 2023
I previously used mackup to back up my Mac's configuration files to iCloud. Today, I found that the mackup directory in iCloud had been deleted by mistake, and even iCloud's file recovery couldn't find it... iCloud then silently synced the deletion and removed the local copies too. Since all my configs were symlinked into the iCloud directory, every local configuration was lost. Now, staring at the one zsh session still open with the old .zshrc loaded, I'm debating whether to give up on recovery and just write a new .zshrc. Years of accumulated configuration, gone 😩.
April 28, 2023
The Connect server now supports HTTP GET: set a method's idempotency level to no side effects, and a GET route is configured for it automatically.

service ElizaService {
  rpc Say(SayRequest) returns (SayResponse) {
    option idempotency_level = NO_SIDE_EFFECTS;
  }
}

idempotency_level is a built-in method option in protobuf with three values, IDEMPOTENCY_UNKNOWN / IDEMPOTENT / NO_SIDE_EFFECTS, the latter two both implying idempotency. In a pure RPC setting, idempotency just means the client can safely retry the call; whether there are side effects carries little prescriptive weight. But for an HTTP RESTful interface, GET conventionally means no side effects, so keying HTTP GET support off this option is quite reasonable and clever. https://github.com/bufbuild/connect-go/releases/tag/v1.7.0
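On the client side this is opt-in; a minimal connect-go sketch (the generated elizav1connect import path and the server URL are assumptions for illustration):

```go
package main

import (
	"net/http"

	"github.com/bufbuild/connect-go"

	// Assumed import path for the code generated from ElizaService.
	elizav1connect "example.com/gen/eliza/v1/elizav1connect"
)

func main() {
	// With WithHTTPGet, unary RPCs marked NO_SIDE_EFFECTS are sent as
	// cacheable HTTP GET requests; everything else still uses POST.
	client := elizav1connect.NewElizaServiceClient(
		http.DefaultClient,
		"https://api.example.com",
		connect.WithHTTPGet(),
	)
	_ = client
}
```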
April 22, 2023
https://www.bitestring.com/posts/2023-03-19-web-fingerprinting-is-worse-than-I-thought.html TIL, clearing website data or using incognito mode cannot prevent browser fingerprinting.
March 22, 2023
https://mastodon.social/@simevidas/109919980697679274 Although it meets expectations, it's still very funny.
February 25, 2023
A few days ago, checking my credit card bill, I was shocked to find that MongoDB Atlas had charged nearly $200 last month, probably because some cronjobs I deployed hit the database heavily. Still, it's unacceptable for a few small services to cost that much; I clearly can't afford Atlas anymore. Lacking a managed MongoDB alternative for now, I spun up a MongoDB instance on Railway, dumped the old data over, restarted all the related services, and finished the migration in about ten minutes. Looking at Railway's Usage stats and extrapolating the monthly cost from average per-minute CPU/MEM usage, it shouldn't exceed $5 🫠 (much more economical than being billed per read/write). But as noted in the Railway writeup below, Railway offers no strong availability guarantee for databases, nor an explicit backup scheme, so I plan to write a GitHub Action that dumps the database to S3 on a schedule and use this setup for the time being.
January 31, 2023
Wikiwand, the Wikipedia-enhancement extension that had been dormant for several years, recently updated to version 2.0. Besides a more modern UI design, it adds a GPT-3-based TL;DR mode that has the AI summarize an entry as if explaining it to a child. After a few weeks with it, browsing Wikipedia has become a pleasure.
January 31, 2023
I recently tried out railway.app, deployed some services, and here are my impressions:
- Like Heroku, service instances are never destroyed, so there are no cold-start issues.
- Billing is based on CPU and memory usage Γ— duration; if monthly consumption stays under $5, nothing is charged.
- Without the paid plan (linking a credit card) there is still a $5 monthly allowance, but total instance uptime is capped at 500 hours (about 21 days) per month. In other words, long-term deployments require the paid plan.
- You can deploy databases (PostgreSQL, MySQL, Redis, MongoDB) and even browse and edit data from the frontend. But I saw no scaling or backup features; it seems to simply run a database container, which isn't suitable for production.
- Railway has a Project concept: one Project can hold multiple environments and deploy multiple services/databases, much like a K8s Namespace, well suited to isolating different businesses.
- It auto-detects a Dockerfile in the project to build and deploy images, and builds are fairly fast.
- Like Vercel, it has no VPC support, so service-to-service calls must go through public domain names, which is rather insecure. This feature is marked WIP on the roadmap, though, so it's promising.
- It does not retain historical logs; you need an external log service like Logtail for storage.
- It does not support cronjobs; the documented workaround is to run a dedicated instance for scheduling...
Overall, for individual developers, the Railway experience is decent. But it's a startup only two years old, and both platform features and documentation are still maturing.
November 22, 2022
In the latest Next.js 13 app-directory documentation, Vercel introduced a Comments component: log in to Vercel and you can comment anywhere on the page. It's much like Figma's comment system, with the canvas swapped for a webpage. Vercel has opened this capability to its users as well: it can be enabled directly on any Preview Deployment (non-production branches). By default, only members of the organization that owns the Deployment can comment, but permissions can be opened to any user. Once a comment is made, Vercel syncs it to the corresponding branch's PR, integrating naturally into the development flow. Wouldn't it be interesting to build a Disqus-like service in this mode?
November 9, 2022
https://github.com/topgrade-rs/topgrade A single command updates packages across package managers: Homebrew, zsh plugins, pip, npm, Docker images, and so on. A comfort for anyone who can't stand leaving things outdated.
November 8, 2022
https://vercel.com/analytics Vercel has acquired Splitbee and is folding its visitor analytics into Vercel Analytics, so there's no longer any need to bolt Google Analytics onto personal projects. I'd always wanted to try Splitbee (I like the product's style); now it's even more convenient.
October 26, 2022
https://mp.weixin.qq.com/s/W_9XYkF5Y_-M-aLdB-b_Ng
October 9, 2022
https://makeavideo.studio/ Meta's AI video generation is too cool!
September 30, 2022
A follow-up to the Notion image-expiry problem (see the July 16, 2022 entry below): I've implemented a more elegant solution, pointing the src of every Notion resource block (img/video/...) on the page to /api/resource/{block-id}. Each request to that endpoint asks the Notion API for the latest S3 link and returns an HTTP 307 redirect to the resource, with Cache-Control ensuring it isn't re-fetched for a while. The browser therefore always lands on an unexpired resource link, and no external cronjob is needed to trigger refreshes.
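A minimal sketch of that endpoint in Go (getFreshFileURL stands in for the real Notion API call and is hypothetical, as is the max-age value):

```go
package main

import (
	"context"
	"errors"
	"net/http"
	"strings"
)

// getFreshFileURL is a hypothetical helper that fetches the block from the
// Notion API and returns its current (pre-signed, expiring) S3 file URL.
func getFreshFileURL(ctx context.Context, blockID string) (string, error) {
	// e.g. GET https://api.notion.com/v1/blocks/{blockID}, then read the URL.
	return "", errors.New("not implemented")
}

func resourceHandler(w http.ResponseWriter, r *http.Request) {
	blockID := strings.TrimPrefix(r.URL.Path, "/api/resource/")

	url, err := getFreshFileURL(r.Context(), blockID)
	if err != nil {
		http.Error(w, "resource not found", http.StatusNotFound)
		return
	}

	// Let the browser/CDN reuse this redirect for a while, but well within
	// the lifetime of the pre-signed link so clients never follow a stale one.
	w.Header().Set("Cache-Control", "public, max-age=1800")
	http.Redirect(w, r, url, http.StatusTemporaryRedirect) // 307
}

func main() {
	http.HandleFunc("/api/resource/", resourceHandler)
	http.ListenAndServe(":8080", nil)
}
```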
September 30, 2022
https://vercel.com/changelog/improved-monorepo-support-with-increased-projects-per-repository Vercel has raised the number of projects a single repo can be linked to from 10 to 60 for Pro users, enough now for a medium-sized monorepo.
September 30, 2022
Cloudflare announces the beta of its message queue product. Starting with Workers, Cloudflare has been steadily building out the infrastructure developers need to build large, reliable applications; whenever you don't need an immediate result but do need to control concurrency, it's time for a message queue.

Limitations:
- Queues per account: ≀10
- Message size: ≀128 KB
- Retries per message: ≀100
- Messages per batch: ≀100
- Batch wait time: ≀30 seconds
- Throughput: ≀100 messages per second
Cloudflare Queues integrates with Cloudflare Workers; for now, sending and consuming messages must go through a Worker, with other API surfaces to come. Some limits may be adjusted, relaxed, or removed as the beta progresses.

Pricing: billed by total operations per month, where each write, read, or delete of up to 64 KB counts as one operation; there are no bandwidth/egress fees. The first million operations per month are free, and each additional million costs $0.40. A complete message delivery takes 3 operations (1 write, 1 read, 1 delete), so a rough monthly estimate is max(0, total messages Γ— 3 βˆ’ 1,000,000) / 1,000,000 Γ— $0.40; for example, 10 million messages is 30 million operations, about (30M βˆ’ 1M) / 1M Γ— $0.40 β‰ˆ $11.60. (Free during the beta period.)

Documentation: Cloudflare Queues: globally distributed queues without the egress fees
September 29, 2022
https://medium.com/vanguards-of-code/lodash-is-dead-long-live-radash-d9d52abf428b radash - a utility function library to replace lodash. Its code looks cleaner and better organized than lodash's, and being written in TypeScript, its type checking is more complete. It also adds utility functions built around async/await. Feels good. https://github.com/rayepps/radash
August 26, 2022
https://hnpredictions.github.io/ This site has crawled all the prediction posts on Hacker News: automated thread necromancy, in a sense. Thanks to HN's high-quality user base, it's fascinating to revisit the predictions users made years ago about technology, politics, and the economy.
August 15, 2022
https://deephaven.io/blog/2022/08/08/AI-generated-blog-thumbnails/ Quite interesting: using DALLΒ·E to generate article illustrations. Inserting an image used to mean searching for one by keyword and then writing an alt for it; now you write the alt first and generate the image from it, with the result guaranteed unique and consistent in style. Revolutionary indeed.
August 13, 2022
https://www.jetbrains.com/idea/whatsnew/ The JetBrains 2022.2 suite is out, including a GoLand update. The new features aren't especially impressive; the biggest change this time is the bundled runtime moving from JRE 11 to JRE 17. Thanks to the macOS Metal API, it does feel a bit smoother.
July 29, 2022
Recently, I noticed a sudden jump in server load, and after a long investigation found that the Redis caching logic wasn't executing at all. Checking Redis, I was surprised to find it empty; a value I set without an expiry also vanished after a while. My suspicion: Redis had no password, got scanned on the public internet, and someone ran flushall. Since this Redis held only unimportant computational caches, I had skipped authentication. Lesson learned: no matter what, never skip setting a password.
July 18, 2022
I came across this issue by accident: when you use Notion as a CMS to build a website, you can't avoid the problem that Notion's S3 image links expire. Because of this, statically generated pages often fail to load their images; even now, on the Railway blog, you can still find images that won't load. The simplest workaround is not to host images on Notion at all and embed external links instead, which is a hassle since everything must first be uploaded to an image host. The issue's author chose to abandon SSG for SSR, which guarantees the image links served are always fresh, but inevitably makes pages slow (you can't use a CDN; with a CDN you can't guarantee freshness). My homepage is also built on Notion, with a lot of server-side pre-rendering work, so SSR is out of the question for me. My current solution is a cronjob that pulls the site's sitemap and uses Next.js's revalidate feature to periodically regenerate each page, making sure freshly rendered pages get cached by Vercel's CDN. A crude solution, but it has worked well so far, and frontend loading is very fast. There's another approach I haven't verified: since we host images on Notion for convenience, we could periodically scan pages via the Notion API, download the Notion-hosted images, upload them to an image host, and swap them in. That solves the problem at the source, but doesn't sound very convenient either.
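A rough Go sketch of that cronjob (the sitemap URL is illustrative): pull the sitemap and request every page, so that time-based revalidate regenerates stale pages and the CDN re-caches them.

```go
package main

import (
	"encoding/xml"
	"fmt"
	"net/http"
)

// urlset mirrors the relevant parts of a standard sitemap.xml.
type urlset struct {
	URLs []struct {
		Loc string `xml:"loc"`
	} `xml:"url"`
}

func main() {
	resp, err := http.Get("https://example.com/sitemap.xml")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var s urlset
	if err := xml.NewDecoder(resp.Body).Decode(&s); err != nil {
		panic(err)
	}

	// Visiting each page lets Next.js regenerate it once its revalidate
	// window has passed, and the CDN then caches the fresh render.
	for _, u := range s.URLs {
		r, err := http.Get(u.Loc)
		if err != nil {
			continue
		}
		r.Body.Close()
		fmt.Println("warmed:", u.Loc, r.Status)
	}
}
```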
July 16, 2022
Some of my services need Redis, so I went straight to Upstash. I assumed the 10k-requests-per-day quota would be more than enough, but it blew past the limit within minutes of real use. In the end I obediently ran a Redis container on my own server, just for caching. It's a little less stable, but if the data is lost, so be it.
June 27, 2022
I have once again done my annual refactor of my Notion-based personal website, switching from Notion's private frontend API to its official open API, to minimize future breaking changes: the private API changes frequently, and each change forces me to re-adapt, which is very annoying. Beyond that, many Block-type fields in the Notion API are incomplete or unsuited to customized frontend rendering, for example:
- Some Block types require secondary queries for child nodes
- Code blocks need asynchronous syntax highlighting
- Multimedia files require extra authentication
- LQIP placeholders need to be generated asynchronously
- Bookmark blocks don't carry OpenGraph information
- And so on...
So I gave up consuming the Notion SDK's data structures directly on the frontend. Instead, I defined my own structures in Protobuf and built a BFF service to aggregate the data, doing all the asynchronous work once on the server side. The frontend then only needs to fetch data during SSG and can render pages without extra client-side computation, which helps both SEO and performance. (Yes, I'm the kind of person who loves optimizing ahead of time.)
June 26, 2022
https://developer.chrome.com/blog/auto-dark-theme/ It turns out Chrome already has a built-in automatic dark mode, expected to reach general users in the near future. I tested it: the overall effect is good, just a notch below Dark Reader. This change will likely free many designers and developers from the burden of maintaining a Dark Mode by hand: customize dark colors for a few key elements and leave the rest to the algorithm. For my personal projects' frontends, I no longer plan to spend effort hand-crafting a Dark Mode palette (programming for the future 😁).
June 20, 2022
When I logged into Encore with Google, it told me the current email was already used by another account and offered to either merge them or create a separate account; it turned out I had registered for Encore with GitHub a year earlier. In my opinion, this experience is excellent. Whenever a site offers several third-party logins (Google / GitHub / Twitter, etc.), I often forget which one I used before and worry that picking the wrong one will spawn a useless duplicate account. Encore's approach avoids that, while still letting you create a second account if you want. Technically, the implementation is easy to guess: store each user's open id per platform (unique index) together with the corresponding email (non-unique index) in the user table, and look up the email at registration time for comparison. When I integrate third-party login in personal projects, I often store only the open id, to simplify both permissions (fetching the user's email usually requires extra OAuth scopes) and table design, but that forfeits the possibility of account aggregation when adding more login methods later.
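My guess at the lookup logic, as a Go sketch (the Store interface and all of its methods are hypothetical):

```go
package account

import "context"

type User struct{ ID int64 }

// Store is a hypothetical user store. (provider, openID) has a unique
// index; email is an ordinary non-unique column.
type Store interface {
	FindByOpenID(ctx context.Context, provider, openID string) (User, error)
	FindByEmail(ctx context.Context, email string) (User, error)
	AttachProvider(ctx context.Context, userID int64, provider, openID string) (User, error)
	Create(ctx context.Context, provider, openID, email string) (User, error)
}

// SignIn resolves a third-party login to an account.
func SignIn(ctx context.Context, s Store, provider, openID, email string) (User, error) {
	// Exact identity seen before: a plain login.
	if u, err := s.FindByOpenID(ctx, provider, openID); err == nil {
		return u, nil
	}
	// Same email from a different provider: offer to merge into the
	// existing account instead of silently creating a duplicate.
	if u, err := s.FindByEmail(ctx, email); err == nil {
		return s.AttachProvider(ctx, u.ID, provider, openID)
	}
	// Brand-new user.
	return s.Create(ctx, provider, openID, email)
}
```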
June 17, 2022
https://github.blog/2022-06-14-accelerating-github-theme-creation-with-color-tooling/ GitHub has released Primer Prism, a color-system design tool that lets you bulk-adjust HSL across the default palette to reshape the entire color scheme. Quite interesting.
June 17, 2022
I usually preview JSON with the command-line tool jless (similar to jq, but with folding):

curl https://example.com/demo.json | jless

A common scenario is copying a curl command from Chrome DevTools and running it to preview the response, where jumping to jsonhero directly isn't convenient. So I wrote a jq-like command-line tool, https://github.com/sorcererxw/jsonhero, that sends JSON output to jsonhero for viewing:

go install github.com/sorcererxw/jsonhero@latest
curl https://example.com/demo.json | jsonhero
June 9, 2022
I discovered a gem on Product Hunt: jsonhero, which lets you view JSON content in a structured way and even provides custom views for strings in special formats:
⁃ URL contents can be previewed (media plays directly; JSON can be explored further)
⁃ Time fields (ISO 8601) are shown on a calendar
⁃ RGB hex values get a color preview
⁃ Embedded JSON strings are also previewed structurally (the most practical one!)
They've also just launched a Chrome extension: on a JSON data page (where the current URL's content-type is application/json), clicking the extension takes you to jsonhero.
June 9, 2022
https://buf.build/blog/connect-a-better-grpc Buf, the team behind the Protobuf management tooling, has released an RPC suite called Connect. They list some problems with gRPC:
⁃ Too complex and hard to debug; the large codebase is prone to vulnerabilities
⁃ Doesn't use the net/http standard library
⁃ Doesn't support browsers
Connect is a refinement of gRPC, with features including:
⁃ Simplified code, including more readable generated code
⁃ Built on the net/http standard library, for better compatibility
⁃ Speaks three protocols: gRPC / gRPC-Web / Connect
⁃ Uses only HTTP POST, supporting both HTTP/1.1 and HTTP/2, and both protobuf and JSON payloads
⁃ Supports the full gRPC protocol, including server reflection and health checks
Compared with Twitch's twirp, Connect stays compatible with the gRPC protocol, whereas twirp is more of a JSON-RPC built on the protobuf generator. Connect really does look like "a better gRPC": it serves high-performance scenarios while degrading gracefully for constrained environments (browsers, debugging).
June 4, 2022
The concept of Bionic Reading has caught on again recently, presumably because a team shared Jiffy Reader, their Bionic Reading Chrome extension, on HN. https://news.ycombinator.com/item?id=31475420 I first encountered Bionic Reading in Reeder. As I understand it, the principle is roughly: bold and darken the first few letters of each word in proportion, letting the eyes locate quickly between words, improving speed and reducing distraction. It genuinely works well for reading English articles. Jiffy Reader itself doesn't seem to be released yet, but I found another Bionic Reading extension in the Chrome store, and the experience is good; you can customize the weight and proportion of the highlighted letters. Recommended. https://chrome.google.com/webstore/detail/bionic-reading-digest-pas/lbmambbnglofgbcaphmokiadbfdicddj/related
May 24, 2022
Just saw this on HN: https://indigostack.app/ One-click local development environment setup, including reverse proxy, SSL, databases, and so on; it really can save a lot of work. The interface presents each component as a unit in a server rack, which is a nice touch. I tried it: the package is huge, about 1.6 GB, presumably with every dependency bundled in. And since it hasn't officially launched, there are still bugs; it's not usable for real work yet.
May 24, 2022
#golang From what I've seen, ByteDance's Go backend is a thorough polyrepo: one repository per service, with shared logic split out into packages for reuse. To me that's less convenient and efficient than a monorepo, but admittedly polyrepo simplifies permissions, CI, and other infrastructure; there's no absolute right choice, it depends on the organization. With polyrepo, day-to-day development often produces changes spanning several repos. If go mod dependencies are involved, you first have to push the dependency repo, then pull the new version into the caller repo by git commit hash. Decidedly tedious. Fortunately, Go 1.18 finally added the workspace feature: with a local go.work you can point a package at another directory, so changes on one side are immediately visible to the other, greatly easing cross-repository development.

.
└── gitlab/
    β”œβ”€β”€ biz_1/
    β”‚   β”œβ”€β”€ svc_1/
    β”‚   β”‚   └── go.mod
    β”‚   └── svc_2/
    β”‚       └── go.mod
    β”œβ”€β”€ biz_2/
    β”‚   └── svc_3/
    β”‚       └── go.mod
    β”œβ”€β”€ common/
    β”‚   β”œβ”€β”€ pkg_1/
    β”‚   β”‚   └── go.mod
    β”‚   └── pkg_2/
    β”‚       └── go.mod
    └── go.work

This way, you just open the whole root directory in the IDE, configure go.work (a sketch follows below), and immediately get a development experience almost identical to a monorepo!
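For reference, a minimal go.work at the gitlab/ root matching the tree above might look like this:

```go
// go.work: modules listed in use() are resolved from these local
// directories instead of their published versions during development.
go 1.18

use (
	./biz_1/svc_1
	./biz_1/svc_2
	./biz_2/svc_3
	./common/pkg_1
	./common/pkg_2
)
```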
May 4, 2022
https://danpetrov.xyz/programming/2021/12/30/telegram-google-translate.html An interesting read: Telegram added a translation feature built on Google Translate, but by freeloading on a private Google API. The API in question powers text translation for the Chrome browser, is provided to users free of charge, and so can naturally be called anonymously; a caller just has to dress up its requests so that Google believes they come from an ordinary user.
April 23, 2022
Demystifying gRPC-Web: all gRPC-Web requests are sent as POST, which means read requests can never be cached by the browser or a CDN, a significant performance cost. See https://github.com/grpc/grpc/issues/7945 For now, the best mitigation seems to be wrapping the client to recognize request bodies and cache responses, but that's not native HTTP semantics and isn't elegant. Still, generating client and server stubs from an IDL is really nice, so it looks like I'll keep using gRPC-Web for now.
April 21, 2022
I've tried out Arctype, another fundamental tool built on the "collaboration + X" concept. As a local SQL client, its quality is quite impressive: it responds faster than DataGrip and has a nicer interface than Sequel Ace. Arctype also supports connecting directly to PlanetScale databases (no local port forwarding needed), which is very friendly for PlanetScale users like me.
April 21, 2022
Struggling with ffmpeg on Vercel Serverless Functions:
- ffmpeg can't be pre-installed (configuring a Lambda layer looks troublesome)
- There's a size limit on the build artifact: compressed (tar.gz), it must stay within 50 MB
In Node you can use ffmpeg-static, but there was no good equivalent for Go, so I recently wrapped a library, github.com/go-ffstatic/ffstatic, in the spirit of ffmpeg-static: the library carries complete ffmpeg executables. To make it easy to use, I embed the whole of ffmpeg into the Go build artifact and extract it to the tmp directory at startup. Some wrinkles remain: in ffmpeg 4.x, the x64 builds of ffmpeg and ffprobe are each 70+ MB, and packaged together they still compress to 50+ MB, over Vercel's limit. Building a slimmed-down ffmpeg myself is unlikely to happen, so I'm considering switching to ffmpeg 3.x to cut the size, and perhaps making ffprobe optional via build parameters.
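The core of the embed-and-extract trick, sketched (paths and names are illustrative, not ffstatic's actual layout):

```go
package ffstatic

import (
	_ "embed"
	"os"
	"path/filepath"
)

// The ffmpeg binary is compiled into the Go artifact itself, so nothing
// needs to be installed on the serverless host.
//
//go:embed bin/ffmpeg
var ffmpegBin []byte

// FFmpegPath writes the embedded binary to the tmp directory (usually the
// only writable location on serverless runtimes) and returns its path.
func FFmpegPath() (string, error) {
	p := filepath.Join(os.TempDir(), "ffmpeg")
	if _, err := os.Stat(p); err == nil {
		return p, nil // already extracted by an earlier call
	}
	if err := os.WriteFile(p, ffmpegBin, 0o755); err != nil {
		return "", err
	}
	return p, nil
}
```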
April 21, 2022
I dug into the Go generics utility library github.com/samber/lo and saw this piece of code... It's fair to say I was a bit stunned. This is exactly why so many people opposed adding generics to Go in the first place.
April 21, 2022