I was reading The Impact of Pre-allocating Slice Memory on Performance in Golang and was curious about the benchmark, so I wanted to try it too for Autograd.
Slice preallocation means specifying the length when creating the slice, so it is created with n zero-valued items already in it. To assign a value, we then use the index instead of append.
Example of preallocating a userIDs slice:
userIDs := make([]uuid.UUID, len(assignments))
for i, assignment := range assignments {
userIDs[i] = assignment.AssignedBy
}
In Autograd, there is a page where an admin can see a list of assignments. This page typically contains 10-20 assignments with pagination. I was curious whether preallocation has an impact for this slice of structs, since the article's author didn't explain the struct layout. The struct I tested has 8 fields, and 3 of them are structs themselves. In my honest opinion, this struct is representative of many backend service codebases.
var asg = assignments.Assignment{
	ID:          uuid.New(),
	Name:        "Assignment 1",
	Description: "Description 1",
	DeadlineAt:  time.Now(),
	Assigner: assignments.Assigner{
		ID:     uuid.New(),
		Name:   "Assigner 1",
		Active: true,
	},
	CaseInputFile: assignments.CaseFile{
		ID:   uuid.New(),
		URL:  "http://example.com",
		Type: "input",
		TimestampMetadata: core.TimestampMetadata{
			CreatedAt: time.Now(),
			UpdatedAt: time.Now(),
			DeletedAt: null.NewTime(time.Now(), false),
		},
	},
	CaseOutputFile: assignments.CaseFile{
		ID:   uuid.New(),
		URL:  "http://example.com",
		Type: "output",
		TimestampMetadata: core.TimestampMetadata{
			CreatedAt: time.Now(),
			UpdatedAt: time.Now(),
			DeletedAt: null.NewTime(time.Now(), false),
		},
	},
	TimestampMetadata: core.NewTimestampMeta(time.Time{}),
}
For the benchmark, I checked slice lengths of 10, 20, and 50 to represent a realistic scenario. The result is interesting: preallocation gives a performance boost of roughly 450x and removes all per-iteration allocations. You can check the benchmark here.
go test -benchmem -bench . github.com/fahmifan/autograd/pkg/core/assignments
goos: darwin
goarch: arm64
pkg: github.com/fahmifan/autograd/pkg/core/assignments
BenchmarkAssignmentNoPrealloc_10-8 363208 3327 ns/op 15808 B/op 5 allocs/op
BenchmarkAssignmentPrealloc_10-8 165713173 7.198 ns/op 0 B/op 0 allocs/op
BenchmarkAssignmentNoPrealloc_20-8 177516 6080 ns/op 32192 B/op 6 allocs/op
BenchmarkAssignmentPrealloc_20-8 96115022 12.32 ns/op 0 B/op 0 allocs/op
BenchmarkAssignmentNoPrealloc_50-8 99542 12601 ns/op 64960 B/op 7 allocs/op
BenchmarkAssignmentPrealloc_50-8 43292256 27.91 ns/op 0 B/op 0 allocs/op
PASS
ok github.com/fahmifan/autograd/pkg/core/assignments 8.980s
So, it's highly recommended to preallocate a slice whenever we know the size ahead of time. The effort is small, but the performance gain is significant.
After migrating my site to 11ty, I decided to self-deploy the comment & analytics apps for this website. Why? Because it's fun to learn something new, and I'd like to keep that data on my own box. So, here are my reviews of comment & analytics apps that you can self-deploy.
So, I began by searching for a comment app. I want it to be lightweight so it won't take many resources on my cheap server. I already knew about Commento, so I searched for "Commento alternative open-source"; these were the top results:
Commento is widely popular, actively maintained, has a nice UI, has a managed cloud version, and is privacy compliant. The backend is written in Go, which is nice. But the managed version's price is quite expensive for me, and the only DB choice is Postgres.
Utterances is popular too; it uses GitHub issues as its backend, it's open-source, and privacy compliant. But you need a GitHub account to add a comment, so it is not good for a general audience.
Isso is also popular, actively maintained, and privacy compliant. The backend is written in Python and uses SQLite for the DB. The UI is OK; I just needed to style the font to match my website.
For the comment app, I chose Isso for its simplicity and SQLite as the DB. Deployment is quite easy since it comes with a prebuilt binary, so I just deployed it as a systemd service and put it behind a reverse proxy.
For analytics I already had some choices:
Umami is popular and widely used, actively maintained, feature-rich, and privacy compliant. It is written in Next.js and uses Prisma. The UI is nice, and the DB choices are Postgres or MySQL.
Plausible is much the same as Umami, but it also offers a SaaS version. It is written in Elixir, has a modern UI, and should be very performant. The DB choices are ClickHouse or Postgres.
Fathom has two versions: Lite and Pro. The Lite version is the old Fathom, rarely maintained, written in Go and Preact, with SQLite or Postgres as DB choices. The Pro version is a SaaS and has more features than Lite.
I also found GoAccess, a server-side tracker. It's written in C (super fast!), feature-rich, and privacy compliant.
For analytics, I chose Fathom Lite. Even if it's not actively maintained anymore, I'll still be able to maintain it to some extent. Yeah, I forked it, upgraded the Go version, and fixed some accessibility issues suggested by Chrome Lighthouse.
Deployment is easy because it is a single binary and uses SQLite. I just made it a systemd service and put it behind a reverse proxy.
So, I migrated my blog to 11ty using the Hylia Starter Kit. The reason is that it is more flexible and easier to customize.
My old Hugo blog used a template too, but via a git submodule, which is not really easy to customize. Well, Hugo is faster, but I don't think I need that for a small blog like this.
The migration process is quite simple. First, I degit the template using npx degit github.com/hankchizljaw/hylia. Then I changed the site metadata: site title, author, email, favicon, etc. Then I copied the markdown blog contents from Hugo to 11ty. I also decided to include the images in the blog repo; before this, I stored the images in a separate GitHub repo.
Deployment is the same: first I added a new submodule of my GitHub Pages repo into the 11ty blog repo. Build the 11ty blog in production mode, copy the dist folder into the submodule, and push it. Then the new site will be deployed by GitHub. The script looks like this:
rm -rf ./dist/*
npm run production
cp -r ./dist/* ./thesubmodule
# Go To the submodule folder
cd thesubmodule
# Stage changes to git.
git add .
# Commit
msg="rebuilding site `date`"
if [ $# -eq 1 ]
then msg="$1"
fi
git commit -m "$msg"
# Push source and build repos.
git push origin master
# Come Back up to the Project Root
cd ..
git add thesubmodule/
git commit -m "rebuild site `date`"
git push origin master
That's how I migrated from Hugo to 11ty. Here is the Lighthouse audit result.
Following the installation guides, voila! Enjoy your pen tablet on Linux.
A cache is a way to store data that is accessed frequently and needs to be fast. We can use a cache to store the result of a computation or of an SQL query. A cache is usually kept in memory in a key-value style to keep storing and accessing items fast.
One cache algorithm is LRU, or Least Recently Used. LRU limits memory usage by setting a maximum number of items that can be stored. When a new item is to be stored and the limit has already been reached, it discards the least recently used item.
TLDR
Check the full code in this repo.
There are two main components in an LRU cache: a Queue and a Hash Map. The Queue stores the items and is implemented as a linked list, while the Hash Map makes item access O(1).
Disclaimer
The queue implementation here is my own take; it may not be the "right" one :)
We need to create structs for the cache item, the linked-list node, and the queue.
type Queue struct {
	head *Node
	tail *Node
}

type Node struct {
	item Item
	next *Node
	prev *Node
}

type Item struct {
	Key string
	// Value is used to store an item
	Value interface{}
}
We will create three methods for the queue: InsertFirst, RemoveLast, and RemoveNode.
A Node has three parts: prev, the stored item, and next. The prev and next fields are pointers to adjacent nodes, and the item's Value is an interface{} that can hold any data type.
The algorithm and code for InsertFirst:
// insert a node at the front of the queue
func (q *Queue) InsertFirst(newHead *Node) {
	if q.isEmpty() {
		q.head = newHead
		q.tail = newHead
		return
	}
	oldHead := q.head
	newHead.next = oldHead
	oldHead.prev = newHead
	q.head = newHead
}
The algorithm and code for RemoveLast:
// remove the last node of the queue
func (q *Queue) RemoveLast() *Node {
	if q.isEmpty() {
		return nil
	}
	if q.isOne() {
		last := q.tail
		q.tail = nil
		q.head = nil
		last.breakLinks()
		return last
	}
	oldLast := q.tail
	newLast := q.tail.prev
	newLast.next = nil // detach the old tail from the new one
	q.tail = newLast
	oldLast.breakLinks()
	return oldLast
}
The algorithm and code for RemoveNode:
// remove a node from any position in the queue
func (q *Queue) RemoveNode(node *Node) {
	if q.isEmpty() {
		return
	}
	if q.isOne() {
		q.head = nil
		q.tail = nil
		node.breakLinks()
		return
	}
	// node is first in the queue with N following nodes
	if node == q.head {
		// the new head is the next node in the queue
		q.head = node.next
		q.head.prev = nil
		node.breakLinks()
		return
	}
	// node is last in the queue with N preceding nodes
	if node == q.tail {
		// the new tail is the node before the removed one
		q.tail = node.prev
		q.tail.next = nil
		node.breakLinks()
		return
	}
	// node is in the middle of the queue
	after := node.next
	before := node.prev
	// link before & after together
	before.next = after
	after.prev = before
	node.breakLinks()
}
The code for MoveToFirst:
func (q *Queue) MoveToFirst(node *Node) {
	// no need to move when there is one or no node in the queue
	if q.isEmpty() || q.isOne() {
		return
	}
	if q.head == node {
		return
	}
	if q.tail == node {
		beforeTail := node.prev
		q.tail = beforeTail
		beforeTail.next = nil
		node.breakLinks()
		node.next = q.head
		q.head.prev = node
		q.head = node
		return
	}
	nodeBefore := node.prev
	nodeAfter := node.next
	nodeBefore.next = nodeAfter
	nodeAfter.prev = nodeBefore
	node.breakLinks()
	node.next = q.head
	q.head.prev = node
	q.head = node
}
Helper methods for the Queue:
func (q *Queue) isEmpty() bool {
	return q.head == nil && q.tail == nil
}

func (q *Queue) isOne() bool {
	return q.head != nil && q.head.next == nil
}
The breakLinks method is implemented as follows:
// set next & prev to nil
func (n *Node) breakLinks() {
	if n == nil {
		return
	}
	n.next = nil
	n.prev = nil
}
// LRUCacher is not concurrent safe
type LRUCacher struct {
	queue   *Queue
	hash    map[string]*Node
	MaxSize int
	count   int
}
The code for Put:
// Put sets a new item or replaces an existing one
func (l *LRUCacher) Put(key string, value interface{}) {
	if l.MaxSize < 1 {
		l.MaxSize = DefaultMaxSize
	}
	if l.queue == nil {
		l.queue = NewQueue()
	}
	if l.hash == nil {
		l.hash = make(map[string]*Node)
	}
	item := Item{
		Key:   key,
		Value: value,
	}
	// if the key already exists, just replace the cache item
	oldNode, ok := l.hash[key]
	if ok {
		oldNode.item = item
		return
	}
	node := &Node{item: item}
	if l.queueIsFull() {
		last := l.queue.RemoveLast()
		l.removeItem(last.item)
		l.hash[key] = node
		l.queue.InsertFirst(node)
		return
	}
	l.hash[key] = node
	l.queue.InsertFirst(node)
	l.count++
}
A recently accessed item is moved to the front of the queue:
func (l *LRUCacher) Get(key string) interface{} {
	if l.hash == nil {
		return nil
	}
	val, ok := l.hash[key]
	if !ok {
		return nil
	}
	l.queue.MoveToFirst(val)
	return val.item.Value
}
The code for Del:
func (l *LRUCacher) Del(key string) interface{} {
	node, ok := l.hash[key]
	if !ok {
		return nil
	}
	l.queue.RemoveNode(node)
	l.removeItem(node.item)
	l.count--
	return node.item.Value
}
The previous code only works for non-concurrent usage; when accessing and writing to the hash map or the queue from multiple goroutines, we need locking and synchronization. Also keep in mind that adding synchronization will impact performance.
We can use a mutex for synchronization. In Go, there are two types of mutex: Mutex and RWMutex. A Mutex is general purpose, allowing only one goroutine at a time to access a resource. An RWMutex has two locking mechanisms. The first is RLock, which can be held by multiple goroutines and is used for reading. The second is Lock, which can be held by only one goroutine and is used for writing.
I use two mutexes in LRUCacher: hashMutex for accessing and mutating hash, and countMutex for mutating count. Also, to help detect race conditions, I use the -race flag when running go test:
go test -race ./...
The rest of the code can be checked in the lrucache repo:
type LRUCacher struct {
	maxSize int64
	queue   *Queue
	count   int64

	countMutex sync.RWMutex
	hash       map[string]*Node
	hashMutex  sync.RWMutex
}
The benchmark:
go test -benchmem -run=^$ -bench ^(BenchmarkLRUCacher)$ github.com/fahmifan/lrucache
goos: linux
goarch: amd64
pkg: github.com/fahmifan/lrucache
cpu: Intel(R) Core(TM) i5-7400 CPU @ 3.00GHz
BenchmarkLRUCacher/Put-4 2813918 422.4 ns/op 89 B/op 4 allocs/op
BenchmarkLRUCacher/Get-4 9076047 131.4 ns/op 16 B/op 1 allocs/op
BenchmarkLRUCacher/Del-4 11179544 107.6 ns/op 12 B/op 1 allocs/op
PASS
ok github.com/fahmifan/lrucache 4.228s
That's the LRU Cache and how you can implement it in Go :)
➜ uname -v -r
5.8.0-55-generic #62~20.04.1-Ubuntu SMP Wed Jun 2 08:55:04 UTC 2021
So I decided to buy this "ORICO BTA508 USB Bluetooth 5.0 Dongle BTA 508 BTA-508 Adapter - Hitam" from Tokopedia. But the only official driver/firmware is for Windows.
Then, what about Linux? To answer that, we need to find out which chip the Bluetooth adapter uses and install a custom firmware/driver.
Use this command to check which chip the Bluetooth adapter uses:
dmesg | grep -i Bluetooth
The output should look like this:
[ 1.810236] usb 1-4.2: Product: Bluetooth Radio
...
[ 3.546498] Bluetooth: hci0: RTL: loading rtl_bt/rtl8761b_fw.bin
[ 3.551034] Bluetooth: hci0: RTL: loading rtl_bt/rtl8761b_config.bin
...
The line hci0: RTL: loading rtl_bt/rtl8761b_fw.bin shows that the chip is an rtl8761b and that it needs the rtl8761b_fw firmware/driver.
So I googled rtl8761b_fw and found a post on the Raspberry Pi forum with the same question.
To be sure, I looked it up once more and found an Arch Linux post. After reading it, I decided to follow it. So I downloaded the driver from the source mentioned in the article and followed the steps from the Raspberry Pi forum post:
sudo cp -iv 20201202_LINUX_BT_DRIVER/rtkbt-firmware/lib/firmware/rtlbt/rtl8761b_fw /lib/firmware/rtl_bt/rtl8761b_fw.bin
sudo cp -iv 20201202_LINUX_BT_DRIVER/rtkbt-firmware/lib/firmware/rtlbt/rtl8761b_config /lib/firmware/rtl_bt/rtl8761b_config.bin
That's it; when I checked in the Bluetooth manager, it worked.
Steps:
Create the folder: mkdir -p ~/admin/caddy
Allow the caddy binary to bind to privileged ports: sudo setcap cap_net_bind_service=+ep ./caddy
Create a Caddyfile in ~/admin/caddy. A simple reverse proxy example:
example.com {
	reverse_proxy localhost:8000
}
Create a user unit at ~/.config/systemd/user/caddy.service:
[Unit]
Description=Caddy Web Server
After=network.target
[Service]
Type=simple
Restart=on-failure
RestartSec=10
ExecStart=/home/deployr/admin/caddy/caddy run
WorkingDirectory=/home/deployr/admin/caddy
LimitNOFILE=4096
PIDFile=/var/run/caddy/caddy.pid
[Install]
WantedBy=default.target
Enable linger so the service starts at boot as this user. Log in as root, then run:
loginctl enable-linger deployr
systemctl --user daemon-reload
systemctl --user enable caddy.service
systemctl --user start caddy.service
systemctl --user status caddy.service
Check the logs with journalctl:
journalctl -f --user-unit=caddy
Setting up the deployr user. Steps:
Log in as root: ssh root@IP
Create a deployr user with sudo access:
adduser deployr
usermod -aG admin deployr
Run ufw app list; it should only list OpenSSH
Allow SSH, then enable the firewall with ufw enable
Verify with ufw status
To let deployr log in over SSH with the same keys as root, just use rsync as root on the VPS:
rsync --archive --chown=deployr:deployr ~/.ssh /home/deployr
The internet might be the source of this. Why? The open internet brings massive, free, and accessible information into society. Social media, a descendant of the internet, is known as "rich" in information: Twitter, Instagram, TikTok, YouTube. Games are richer still than those social media, combining audio, visuals, storytelling, and interactivity.
So, what is the root cause of this "information overflow"?
Is it our behaviour of constantly staring at our screens? Is it the context switching? Social media carries many different contexts between tweets. Does that lower the quality of how we absorb information? Does it create FOMO? Do we actually have this "quality"?
My hunch is yes. Actually, I barely remember the tweets I have read. When I find an interesting article or tweet, I save it into the Pocket app, but it's unlikely I'll ever revisit it there. This might be a result of FOMO.
In software engineering, there is a phrase: You Aren't Gonna Need It, or YAGNI. It means you rarely need something beyond the current scope. Now, how do we identify something as YAGNI? If it is of little significance, leave it; put it into YAGNI. I think information on the internet is the same: its level of significance weighs whether it falls into YAGNI or not.
But how can we judge whether information is YAGNI for us if we have not read or seen it yet? I don't know yet. Maybe we can hide it behind software, like someone hand-picking information for us in digestible sizes.
An event bus, if illustrated, looks like the following picture.
There are publishers that send messages to the event bus, and those messages can be received by many subscribers.
We will try using an event bus in a program we are about to build, and we will build it in Go (Golang).
If you don't know the Go language yet, check out Dasar Pemrograman Golang.
Our case study is item orders and payments in an online shop. When a user creates an order, a payment will be created.
+-------------+
|create order +----------+
+-------------+ |
+----v----+
| |
| BUS |
| |
+----^----+
|
|
+--------------+ |
|create payment+---------+
+--------------+
OK, first we need to initialize the project. Create your project folder, then init a module by running go mod init shop inside it. For brevity, I will name this module shop.
The Go version used while writing this article is go1.12.17.
Next, we create the model first. Create a model package, then create a model.go file.
├── model
│ └── model.go
In this model package, we create three structs: Product, Order, and Payment.
An Order can contain many products.
package model

type Product struct {
	ID    int64
	Price float64
}

type Order struct {
	ID         int64
	ProductIDs []int64
}
Then, a Payment holds an OrderID along with its PaymentStatus.
PaymentStatus is essentially an "enum" with three values: pending, paid, and canceled.
type Payment struct {
	ID      int64
	OrderID int64
	Status  PaymentStatus
}

type PaymentStatus int

// PaymentStatus enum
const (
	PaymentStatusPending  = PaymentStatus(1)
	PaymentStatusPaid     = PaymentStatus(2)
	PaymentStatusCanceled = PaymentStatus(3)
)
Next, we create a service package. We build three services, ProductService, OrderService, and PaymentService, all of which are interfaces.
└── service
└── service.go
ProductService has a List method. OrderService has a CreateOrder method. Finally, PaymentService has a CreatePayment method.
package service

import "shop/model"

type (
	ProductService interface {
		List() []model.Product
	}

	OrderService interface {
		CreateOrder(productIDs []int64) *model.Order
	}

	PaymentService interface {
		CreatePayment(orderID int64) *model.Payment
	}
)
Next, we implement those interfaces with structs.
In the service package, create a product_service.go file.
└── service
├── product_service.go
For brevity, the product data is stored in the products field. The NewProductService function instantiates productService and fills products with dummy data.
package service

import (
	"shop/model"
)

type productService struct {
	products []model.Product
}

func NewProductService() ProductService {
	return &productService{
		products: []model.Product{
			{ID: 111, Price: 100.0},
			{ID: 112, Price: 200.0},
			{ID: 113, Price: 300.0},
		},
	}
}

func (ps *productService) List() []model.Product {
	return ps.products
}
Then, create an order_service.go file in the service package.
└── service
├── order_service.go
In this code there is a bus field of type *bus.Bus, used to publish events/topics. The package used is github.com/mustafaturan/bus. The second argument of Emit is the name of the topic being published; here we use order.created.
package service

import (
	"context"
	"time"

	"shop/model"

	"github.com/mustafaturan/bus"
	log "github.com/sirupsen/logrus"
)

type orderService struct {
	bus            *bus.Bus
	productService ProductService
}

func NewOrderService(ps ProductService, bus *bus.Bus) OrderService {
	return &orderService{
		productService: ps,
		bus:            bus,
	}
}

func (o *orderService) CreateOrder(productIDs []int64) *model.Order {
	order := &model.Order{
		ID:         time.Now().UnixNano(),
		ProductIDs: productIDs,
	}
	log.Info("create order, productIDs: ", productIDs)

	// publish or emit "order.created"
	_, err := o.bus.Emit(context.Background(), "order.created", *order)
	if err != nil {
		log.Error(err)
		return nil
	}
	return order
}
The next service is payment_service. This service has one method, CreatePayment.
package service

import (
	"shop/model"
	"time"
)

type (
	paymentService struct {
		orderService OrderService
	}
)

func NewPaymentService(os OrderService) PaymentService {
	return &paymentService{
		orderService: os,
	}
}

func (ps *paymentService) CreatePayment(orderID int64) *model.Payment {
	return &model.Payment{
		ID:      time.Now().UnixNano(),
		OrderID: orderID,
		Status:  model.PaymentStatusPending,
	}
}
Then, we instantiate the bus package. I took this constructor function from the example in the github.com/mustafaturan/bus repo.
package eventbus

import (
	"github.com/mustafaturan/bus"
	"github.com/mustafaturan/monoton"
	"github.com/mustafaturan/monoton/sequencer"
	log "github.com/sirupsen/logrus"
)

func NewBus() *bus.Bus {
	// configure id generator (it doesn't have to be monoton)
	node := uint64(1)
	initialTime := uint64(1577865600000) // set 2020-01-01 PST as initial time
	m, err := monoton.New(sequencer.NewMillisecond(), node, initialTime)
	if err != nil {
		log.Fatal(err)
	}

	// init an id generator
	var idGenerator bus.Next = (*m).Next

	// create a new bus instance
	b, err := bus.NewBus(idGenerator)
	if err != nil {
		log.Fatal(err)
	}
	return b
}
OK, next we create an eventhandler package.
├── eventhandler
│ └── handler.go
The event bus has handlers in the form of functions. A handler receives events emitted into the event bus; from a received event, we can check its topic.
package eventhandler

import (
	"shop/model"
	"shop/service"

	"github.com/mustafaturan/bus"
	log "github.com/sirupsen/logrus"
)

type EventHandler struct {
	PaymentService service.PaymentService
}

func (e *EventHandler) HandleOrder(event *bus.Event) {
	switch event.Topic {
	case "order.created":
		log.Infof("received event %v", event.ID)
		order, ok := event.Data.(model.Order)
		if !ok {
			return
		}
		payment := e.PaymentService.CreatePayment(order.ID)
		log.Info("create payment", payment)
	}
}
Now we create the main function, where we wire up the services we have built.
package main

import (
	"os"
	"os/signal"
	"syscall"

	"shop/eventbus"
	"shop/eventhandler"
	"shop/service"

	"github.com/mustafaturan/bus"
	log "github.com/sirupsen/logrus"
)

func main() {
	handler := &eventhandler.EventHandler{}

	bbus := eventbus.NewBus()
	bbus.RegisterTopics([]string{"order.created"})
	bbus.RegisterHandler("order-channel", &bus.Handler{
		Matcher: "order.*", // matches every order.* topic
		Handle:  handler.HandleOrder,
	})

	productService := service.NewProductService()
	orderService := service.NewOrderService(productService, bbus)
	paymentService := service.NewPaymentService(orderService)
	handler.PaymentService = paymentService

	products := productService.List()
	orderService.CreateOrder([]int64{products[0].ID})

	// the following code blocks the main goroutine until interrupted
	sigCh := make(chan os.Signal, 1) // buffered, as signal.Notify requires
	done := make(chan bool)
	signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM)
	go func() {
		<-sigCh
		log.Info("exiting...")
		done <- true
	}()
	<-done
}
When run, the program output looks like this:
INFO[0000] create order, productIDs: [1586206522725414000]
INFO[0000] received event 0096Tf1h00000001
INFO[0000] create payment{id: 1586206522725562000, order_id: 1586206522725417000, status: pending}
^CINFO[0001] exiting...
0096Tf1h00000001 is the id of the emitted event, judging from the order of the logs.
So that's how an event bus works and how to use one. Perhaps a next article will cover implementing an event bus in a web service.
One common pattern for using a worker: when a request comes into a web service, reply immediately, then schedule a task for that request onto a worker. Suppose we get the following requirement:
When a user uploads an image, resize it first, then save it to storage.
That requirement can then be split into separate steps.
By using a worker, the application responds to the user faster and reduces long-running requests that cause blocking.
In building software, you must first understand what a business or job actually needs. To build software that is fit for purpose, collaboration between engineers and stakeholders is required. The key to good collaboration is communication that both sides understand. DDD, or Domain Driven Design, is a principle for building software, from design through development, in collaboration with a domain expert. Without deep mastery of a problem, it is hard to claim that the solution built is fit for purpose. This is where the role of a domain expert becomes important.
Ubiquitous Language is a language everyone can understand, blending technical terms with business jargon. Put another way, it means making a software design that the domain expert understands, down to the level of code abstractions if necessary.
From the understood problem, a model must be made that developers can implement. Some foundations for building this model are:
Separating the parts of the software according to their purpose. If you have built a dynamic website, you have likely met MVC, or Model View Controller, which is a software architecture. Unlike MVC, Layered Architecture is more about isolating the core business logic from outside dependencies. These dependencies can be database access, the user interface (API), third-party libraries, and so on. This isolated part is called the application layer.
When coding, the first solution is often not that good, whether in effectiveness or in clarity of intent. Therefore, continuous refactoring is something that needs to be done often.
This part I have not yet read and understood; it will be updated at a later opportunity.
I hope this short piece sparks a bit of curiosity about building reliable software through DDD. This is part one; next, I will try to apply it directly while building an application.
Thanks for reading
After graduating from the Google Developer Kejar (GDK) program, I had the opportunity to attend the Bekraf Developer Conference (BDC) by Dicoding.
This BDC was held again in Bandung, at The Papandayan Hotel. This year there were two tracks: one from Dicoding and one from the Indonesian Game Association.
The track I attended was Dicoding's. It had 4 sessions. The first discussed the hegemony of IT majors. The second was about producing digital talent from campuses. The third was about producing digital gems from communities. The fourth was about the challenges of building applications.
An IT major at a university is not the only place to learn to become a software developer; many non-formal institutions provide training, both online and offline. Even a Gojek driver without formal education managed to get a job as a software developer after taking such training.
Besides that, software developers don't only come from IT majors; people from other majors can get a job as a software developer as long as they have the skills.
This session was filled by IT lecturers. One of them was Dr. Inggriani Liem, a former lecturer at STEI ITB. She explained how to become a successful software developer. What I can summarize: learn computational thinking, algorithms and data structures, and the functional and object-oriented paradigms.
Other lecturers also described the condition of their students on campus. According to one, their students want everything instant; for example, given an assignment, they want it easy and quickly done. Another explained that their students lack interest in their coursework, so they often give motivation.
In this session, one of Dr. Inggriani's students came and asked whether it is possible to shorten the time to learn basic programming concepts to 3 months. According to Dr. Inggriani, this can be done provided the environment already supports computational thinking. The obstacle in Indonesia is that there are still regions that have not made full use of computer technology.
In this session, community activists attended: from GDE Android Indonesia, the lead of JakartaJS, a software engineer at LINE who is also a Binus lecturer, and a co-founder of Codepolitan. They told how they grew from communities, the role communities play in advancing software developer talent in the regions, and how communities helped LINE introduce its API features to developers in Indonesia.
WIP ...
WIP ...
youtube-dl -i --extract-audio --audio-format mp3 --audio-quality 0 <url>
If you're using zsh or fish, wrap the url in quotes: 'url'
Happy downloading
In this first talk they told us what A/B testing is, how to do it, and the technical part of doing A/B testing using Firebase Remote Config.
A/B testing is used when you want to know which design works for your users. It can be a different design of a button, a checkout page, a color scheme, fonts, etc.
First, you need to set your goal: what do you want to get from this test?
Let's say you have an online news/magazine. Your users can read part of an article, but to read the full article they need to sign in/up. So you want to increase the sign in/up rate.
Now you make a hypothesis: users will sign in if I make the button capsule-shaped instead of rectangular.
Then you make two versions of it and ship the new version you want to test to a sample of your users.
Then, after you collect the data from your test, you can decide whether the change to the button affected the sign in/up rate in your app.
"At least I've been trying hard"
That was my motivation while preparing for this online test and kept me trying to solve exercises. Just try it and feel the struggle; when you solve one, you get that eureka moment. Even if I can't solve it, at least I've been trying hard. I hope this test gets a good result so I can go on to the interview.
If you want to try or learn algorithms/competitive programming, try these:
I used Codility for exercises since the online test uses Codility.
Thank you :D
I'd like to post my journey of making these apps in future posts. Well, my English is not that good, so I might use Bahasa Indonesia if it's too difficult for me to explain in English. Hahaha!
submodules. This is the first time I used git submodules, and it felt like wasting my time. But I was just too curious; it ended up taking hours of googling, but in the end I got what I wanted.
wkhtmltoimage --images --javascript-delay 5000 http://localhost:5500/ testcv2.png
I'm using a local server for this and coding in VS Code :D